What if a computer could randomly generate a picture of you without the assistance of any imaging devices?
Imagine a program that generated 320x200 images using an algorithm that cycled each and every one of the 64,000 pixels through every possible color. Assuming this program allowed 255 values per color channel (255^3 = 16,581,375 colors, just shy of the 16,777,216 of standard 24-bit "true color"), that equates to 16581375^64000 possible combinations; a grand total of roughly 5.1837*10^462055 images! And that's the short answer (calculations were done with a program called Haxial Calculator). Obviously this is an unimaginably large integer. The amount of time and resources that would go into producing this many images would be astronomical. On top of that, the amount of time it would take to sort through each image and keep only the ones showing anything even remotely distinguishable from noise would be vastly larger still, considering the images would need to be inspected by humans personally.
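If you'd rather not trust a calculator program, the arithmetic is easy to sanity-check yourself. Here's a quick Python sketch (variable names are just illustrative) that reproduces the magnitude of that number:

```python
# Each of the 64,000 pixels (320 x 200) can independently take any of
# 255**3 = 16,581,375 colors, so the total number of distinct images
# is colors ** pixels.
import math

pixels = 320 * 200    # 64,000 pixels
colors = 255 ** 3     # 16,581,375 colors, as used above

# The exact integer has hundreds of thousands of digits, so report its
# magnitude via logarithms instead of printing the whole thing.
magnitude = pixels * math.log10(colors)
print(f"total images ~ 10^{magnitude:,.0f}")  # prints: total images ~ 10^462,056
```

The log comes out to about 462,055.71, i.e. roughly 5.18*10^462055 images, in agreement with the figure quoted above.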
This general concept is nothing new to science. The idea of giving typewriters to monkeys and letting them hit keys at random for an indefinite amount of time until the works of Shakespeare are inevitably reproduced is one that has been revisited numerous times over the last century. The same basic concept applies here, but in this case our monkeys are random image generators and our Shakespeare is a photo-realistic image of any living human being.
The obvious implication is that this task is nothing more than an exercise in futility. There aren't enough computers or enough time left in our lonely corner of the universe to see it through (our meager sun would likely die before a recognizable image was produced, assuming humanity lasted that long). Probability theory tells us there is a chance that the very first image produced, or at least one within your lifetime, will fit the criteria perfectly. Then again, you have a much greater chance of suddenly being swallowed by a quantum singularity on your way to work tomorrow. So what, then, is the point of all this if it's such an impossible task? I'm glad you asked. I'm not going to tell you that I have solved the riddle. No, but I do have some theories of my own that may one day help someone else achieve this goal.
Problem number one is that there are just way too many possible combinations of pixels. It should be apparent that the vast majority of the images produced will just be junk, or 'noise'. So, in theory, if a clever programmer were to come up with an image analyzer that could filter out pure-noise images automatically, they could probably cut that insanely large number in half in one fell swoop.
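As a toy illustration of what such a filter might look like, here's a minimal Python sketch. It assumes one simple definition of "pure noise" (adjacent pixels that are essentially uncorrelated) and just thresholds the average difference between neighboring grayscale pixels; a real analyzer would be far more sophisticated, and the function name and threshold are my own invention:

```python
# Hypothetical noise filter: structured images tend to have smooth local
# neighborhoods, while uniform random noise has wildly varying neighbors
# (mean absolute difference near 256/3 ~ 85 for 8-bit values).
import random

def looks_like_noise(image, threshold=64):
    """image: list of rows of 0-255 grayscale values.
    Returns True when horizontally adjacent pixels differ so much on
    average that the image is almost certainly unstructured noise."""
    diffs = [
        abs(row[x] - row[x + 1])
        for row in image
        for x in range(len(row) - 1)
    ]
    return sum(diffs) / len(diffs) > threshold

random.seed(0)
noise = [[random.randrange(256) for _ in range(320)] for _ in range(200)]
flat = [[128] * 320 for _ in range(200)]  # a perfectly smooth image

print(looks_like_noise(noise))  # True: mean adjacent difference ~85
print(looks_like_noise(flat))   # False: mean adjacent difference is 0
```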
The second and arguably hardest problem is that half of 5.1837*10^462055 is still a ridiculously large number, and it would still be wildly impractical to attempt to filter through so many images. It certainly wouldn't happen in your lifetime. This is a much harder nut to crack, because the solution could never be as easy as throwing out the images that are pure noise. Most of the images in the 'good' half left over would, at the very least, have some sort of recognizable pattern to them. So we are still left with an overwhelming number of potentially pretty but otherwise completely useless images. The first place to look for answers is obvious: in the code. Again, if a clever enough programmer could take just some of the randomness out of the equation, we would be well on our way to reducing our incredibly large number of possible images to something realistically manageable.
In this case, I believe the best candidate solution may be found buried somewhere within cellular automata. I'm not going to regurgitate a complete explanation of what a cellular automaton is (see Wikipedia), but to put it simply, it's a grid of cells (or pixels) where each one exists in a predefined state. Incredibly complex patterns emerge from very small equations or rules applied to the grid. So, in theory, if a program applied this technique to either the generation or the inspection of these images, it could be used to pluck out images containing certain desirable features, making the process much more manageable by specifying what types of things to include, rather than just what looks interesting. In other words, instead of generating an unbelievably large database of images for review, generate a much smaller database of pattern equations from which images can be generated that are already very close to what we would want to see. It sounds a lot like we're telling the computer to generate an exact image to our specifications and calling it 'randomly generated', but be careful with that assumption--this would be much different from that, in that it's more of a guided randomization. What would really be happening is we would be telling the computer what its limits of creativity are and then letting it have free rein within those boundaries. So even if we're telling the computer to draw us a picture of a kitten, we wouldn't know exactly what that image of a kitten would end up looking like.
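To make the "complex patterns from tiny rules" point concrete, here's a short Python sketch of an elementary cellular automaton. I've picked Wolfram's Rule 30 arbitrarily as the example rule; the whole "pattern equation" is a single integer, yet it grows an intricate, structured image from one seed cell rather than from thousands of independent random pixels:

```python
# Elementary cellular automaton: each cell's next state is a bit of RULE,
# indexed by its (left, self, right) neighborhood. Edges wrap around.
RULE = 30            # Wolfram's Rule 30, chosen arbitrarily for this demo
WIDTH, STEPS = 31, 15

def step(row):
    """Apply the rule once to a row of 0/1 cells."""
    return [
        (RULE >> (row[(i - 1) % WIDTH] * 4 + row[i] * 2 + row[(i + 1) % WIDTH])) & 1
        for i in range(WIDTH)
    ]

row = [0] * WIDTH
row[WIDTH // 2] = 1  # a single live cell seeds the whole pattern

for _ in range(STEPS):
    print("".join("#" if c else "." for c in row))
    row = step(row)
```

Run it and a triangular, chaotic-but-structured pattern unfolds down the terminal; swap in a different RULE value and you get an entirely different family of images from the same one-line mechanism.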
So ponder that for a while. This is something I might delve into more another time, but it should be enough to whet your appetite until then. I would love to hear your thoughts and comments!