Anyone out there?
I started thinking about an apple (read my first post for background information), and ended up spending a couple of hours wondering whether we were really opening our ears to an extra-terrestrial signal.
Well, the stray thoughts that preceded this one were about random numbers, and how I read someplace that with ‘quasi’-random numbers, if you collected enough of them, you would gradually see a pattern. Of course, this isn’t true for ‘real’ random numbers.
I shall start my explanation with a few examples to give you focus (and to get your brain gears churning!)
Here is an image showing a plot of the rand() function of the ‘C’ programming language. If the function truly gave you random numbers, the plot would look more like the (negative of) white noise you see on a television channel with no feed, or the vision you get if you poked both your eyes real hard (you know, you shouldn’t do that!)
The pattern we see in the plot is pretty obvious. You can easily extrapolate the results in your mind and imagine the plot turning into parallel lines by the time you have 10k numbers to plot. But the question that eats my brain is: would you have the slightest inclination to suspect a ‘pattern’ if you had only seen the plot of the first (say) 50 random numbers?
Probably not. The first few numbers would obviously seem very random. Even the first 20-30 numbers. Soon, a trend might start appearing, and only then would you realize that there is a pattern in that sequence of ‘random numbers’.
This example was meant to show that what appears ‘random’ in a small amount of data could in fact reveal a pretty obvious pattern once you have more of it.
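If you’re curious where that pattern comes from: many classic rand() implementations are simple linear congruential generators. Here is a minimal sketch (the constants below appeared in some old C libraries; your libc may well use different ones), showing how blatantly non-random the low-order bits of such a generator are:

```python
# A minimal linear congruential generator (LCG), using constants that
# appeared in some classic C library rand() implementations (an assumption;
# your libc may use something else entirely).
def lcg(seed, n):
    x = seed
    out = []
    for _ in range(n):
        x = (1103515245 * x + 12345) % (2 ** 31)
        out.append(x)
    return out

nums = lcg(42, 20)
low_bits = [x & 1 for x in nums]
print(low_bits)  # the lowest bit strictly alternates: 1, 0, 1, 0, ...
```

Good generators (and true random sources) don’t have such short-period structure, which is part of why their plots look like TV static instead of stripes.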
It’s just an 8-bit binary representation of ‘We are here’. (I used this site for the conversion)
Imagine an intelligent civilization out there sending us this message (and for sanity’s sake, let’s assume they speak English, and that they also use 8-bit binary representation). What if our observatory or satellite was pointed towards them for only a short time, and we picked up just the bold bits below?
We would then wrongly decode it as ‘vR&R†W&’ and assume this gibberish is just random.
This example clearly shows how we could make the mistake of looking at only a small part of a signal and wrongly assuming it is random.
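You can try this misalignment yourself. A quick sketch (the message and the offset of 3 bits are just illustrative assumptions):

```python
# Encode "We are here" as 8-bit ASCII, then pretend our dish caught the
# stream starting a few bits late, and decode the misaligned bits.
message = "We are here"
bits = "".join(f"{ord(c):08b}" for c in message)

offset = 3  # an arbitrary illustrative offset into the stream
partial = bits[offset:]
chunks = [partial[i:i + 8] for i in range(0, len(partial) - 7, 8)]
decoded = "".join(chr(int(chunk, 2)) for chunk in chunks)

print(repr(decoded))  # gibberish, nothing like "We are here"
```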
This time, consider that maybe we did pick up the whole signal, but since it kept repeating, we guessed the starting bit wrongly. So we decode the signal assuming the bold bit to be the first bit.
And we get another gibberish message ‘ÙH™H™U’ which we again assume is random data.
This example shows that it is important to identify (or locate) the start of the signal for it to make sense.
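The wrong-start-bit case is just a rotation of the same bit stream. Another sketch (the rotation amount of 5 bits is an arbitrary assumption):

```python
# Encode a message, then rotate the bit stream as if we tuned in mid-cycle
# of a repeating broadcast and guessed the wrong starting bit.
message = "We are here"
bits = "".join(f"{ord(c):08b}" for c in message)

start = 5  # arbitrary wrong guess for the first bit of the cycle
rotated = bits[start:] + bits[:start]  # same signal, wrong phase
decoded = "".join(chr(int(rotated[i:i + 8], 2))
                  for i in range(0, len(rotated), 8))

print(repr(decoded))  # the exact same 88 bits, decoded to gibberish
```

Note that not a single bit was lost here; only the phase was wrong, and the message is already unrecognizable.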
Suppose we get the following signal from space one fine day.
This time, by a stroke of luck, we get the whole signal, and also correctly identify (*cough* guess) the start bit. We use an 8-bit binary representation, decode the signal to ‘#Eg‰’, and discard it as random yet again.
But the civilization which sent it might actually have encoded it using a 4-bit representation. So what they sent was ‘0123456789’.
This is an example of how plain encoding standards could affect whether we perceive something as random noise or an intelligent signal.
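This mix-up is easy to reproduce: pack the digits as 4-bit nibbles, then decode the very same bits with an 8-bit symbol width:

```python
# The sender encodes each digit 0-9 as a 4-bit nibble; the receiver
# wrongly pairs the nibbles into 8-bit bytes and decodes those.
digits = "0123456789"
bits = "".join(f"{int(d):04b}" for d in digits)  # 4 bits per symbol

received = bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))
print(received)  # b'\x01#Eg\x89' -- the '#Eg' gibberish from above

sent = "".join(str(int(bits[i:i + 4], 2)) for i in range(0, len(bits), 4))
print(sent)  # '0123456789' with the right symbol width
```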
What if you get a signal like this on another day?
You analyze it, decode it all you like, feed it into a number crunching super-computer, but you just can’t get an ‘intelligent’ message out of it. You would assume it’s random again. You discard it.
Think about this analogy.
There are two people standing on hilltops. One of them uses a flashlight to signal the other. He may use Morse, binary, or any other encoding scheme. The point is: if the receiving person was technologically thousands of years ahead of the sending person, would he perceive the switching on and off of the light as bits? Probably not.
I expect he would record the photons he received from the first flash and start decoding that stream. He wouldn’t realize he needs to perceive the whole flash as one single bit! If he had collected enough data (minutes’ worth, instead of milliseconds’ worth), he might have seen a pattern of flashes. But since he was facing the sender for only a few seconds, all he got was (say) 10k photons, whose arrival times look random and make no sense.
Here, we make the mistake of over-analyzing a signal. Not everyone thinks alike, and what is a stream of 10k photons to one could just mean a simple ON bit to another.
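The hilltop scenario can be sketched as a symbol-rate mismatch (the message and the symbol period below are arbitrary assumptions):

```python
# The sender holds the flashlight on or off for a long "symbol period";
# the receiver samples far faster and must guess that period to decode.
message_bits = [1, 0, 1, 1, 0, 1]
symbol_period = 1000  # receiver samples per sender bit (an assumption)

samples = [b for b in message_bits for _ in range(symbol_period)]

def decode(samples, period):
    # Take one sample per assumed symbol period.
    return [samples[i] for i in range(0, len(samples), period)]

print(decode(samples, symbol_period))  # [1, 0, 1, 1, 0, 1]: the message
print(len(decode(samples, 1)))         # 6000 "bits" of over-analysis
```

Guess the period right and six bits fall out; treat every sample as a bit and you drown in thousands of meaningless ones, just like the receiver counting individual photons.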
Think about it: all these scenarios assume that intelligent life uses the English language and its binary representation. So even in this ‘best-case’ scenario, we fail to comprehend an intelligent signal as one, and instead perceive it as random before discarding it.
In reality, chances are that ET would be pretty incomprehensible to us. They would probably talk in a language that would sound like random noise even to our ears; they may even communicate telepathically. Why, then, would they think of sending a signal in our ‘language’ (whether a spoken language like English, or a mathematical one like binary)?
Hey, dolphins are intelligent; so are parrots and crows, even elephants and South Indians. Are we able to make any sense of their languages? I bet these guys behind the $1b devices can’t decipher Malayalam either! So what does that say?
What then are our true chances of coming across an intelligent signal and recognizing it as one?