20110529

extra-solar navigation

I just read an online discussion of Stargate navigational mechanisms, specifically the inconsistency of having a six-point destination coordinate but only a single point of origin. The conclusion was that it's basically silly, unworkable, and functions only as a plot device. It made me think, though, about the underlying issue.

This is not an uncommon subject in science fiction. There are many, many stories which revolve around navigational difficulties, finding and obtaining star charts, or getting lost in space and trying to find some known quadrant in order to go home. Some of that is simply an updating of the traditional lost-at-sea tales of our pre-satellite-navigation past, but some of it is a real issue.

Obviously, while we're still within the Solar System, we won't have too much trouble. Fairly standard telescopes and computation will tell us where each of the known astronomical bodies - including Earth - will be at any given point in time. But what happens outside our star system? What happens when we're far enough away from our star that the constellations no longer look familiar?

Will each ship's computer need to model, in four dimensions (time being a significant factor in these calculations, especially if we have to consider relativistic time dilation, or the long periods experienced by generation ships or crews in some form of induced hibernation), the location of every star in the galaxy? Every star within the home galaxy and every star within the current galaxy, as well as the relative location of every galaxy in the universe? That is a computationally massive task.
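Back-of-envelope, just storing such a catalogue is surprisingly tractable; it's the dynamic modelling that hurts. Here's a minimal sketch of the bookkeeping, in Python, assuming a bare table of positions and straight-line velocities (no relativity, no galactic rotation model; every number below is a rough illustration, not real data):

    import numpy as np

    SECONDS_PER_YEAR = 3.156e7
    KM_PER_PARSEC = 3.086e13

    def propagate(positions_pc, velocities_kms, years):
        # Linearly extrapolate star positions to a new epoch.
        # positions_pc:   (N, 3) array, parsecs
        # velocities_kms: (N, 3) array, km/s space motion
        drift_km = velocities_kms * years * SECONDS_PER_YEAR
        return positions_pc + drift_km / KM_PER_PARSEC

    # Scale: the Milky Way holds on the order of 1e11 stars. Six
    # float64 values per star (position + velocity) is 48 bytes each:
    n_stars = 1e11
    print(f"catalogue size: ~{n_stars * 6 * 8 / 1e12:.0f} TB")

A few terabytes per galaxy for a static snapshot, then - before adding proper time-dependence, relativistic corrections, or the hundred billion other galaxies.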

How else could it be done? Are we doomed to the science-fictional future of lost maps leading to lost star systems and worlds?

20110217

long term human suspended animation

Whether you call it cryogenic suspension, cryopreservation, or simply cryonics, cold is the only known way of reliably inducing a state of stable suspended animation in animal tissues (bearing in mind that 'suspended animation' may be a misnomer if the tissue we're talking about is a steak, but is still correct if we mean a transplantable heart, liver, or tooth). Cool the cells down to the point where molecular and cellular degradation occurs very, very slowly, and the theory is that you can bring someone back. It's a little like the people who fall into freezing lakes and go into hypothermic shock, effectively drowned for minutes at a time, and are then revived - but vastly more complicated.

The main complication is that the human body is approximately 70% water. When water freezes, it forms ice crystals, drawing liquid out of the cells to form crystals in the extracellular space. This dehydrates the cells themselves until they are damaged beyond the point of recovery. That's why, if you freeze vegetables and then thaw them out, you usually get that slightly funky thawing liquid coming out as well: the ice crystals between the cells melt into water, but the cells in the vegetables have been damaged too badly to reabsorb the water effectively.

The idea that ice crystals form within cells and burst the cell membranes is a myth, by the way.

When we freeze cells, such as sperm and ova, or very small cell clusters such as zygotes, we get around the ice crystal problem by vibrating the cells as we lower the temperature. This inhibits ice crystal formation, and limits the cell damage. Or, in some cases, we snap-freeze the cells, lowering the temperature so quickly that there is no time for ice crystals to form, and the cells freeze solid.

Neither technique works for entire human bodies or even for smallish organs, never mind entire brains. If that large a cluster of cells is cooled too quickly, the outer layers get significantly colder before the inner layers have caught up, and the uneven shrinkage causes problems such as peeling of layers of tissue. Vibrating an entire large cell cluster, such as an organ or a whole organism, is impractical with current technology, and may cause tissue damage in itself. Instead, to put a person into that sort of suspended state, we replace the blood with cryoprotectant chemicals - basically biological antifreeze - and cool the body relatively slowly to the desired temperature. The cryoprotectants prevent ice crystals from forming, and instead the entire cell cluster is vitrified - turned into a very cold, stable, noncrystalline solid with the molecular structure of a liquid. This vitrified tissue must be kept within a specific temperature range: too warm and the tissue is not effectively preserved; too cold and there is a danger of cracking even though no ice crystals have formed.
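To get a feel for why sheer size is the enemy here, consider a toy one-dimensional heat-diffusion model of a slab of tissue whose surfaces are suddenly chilled (explicit finite differences; the diffusivity, dimensions, and temperatures are illustrative guesses, not physiological data):

    import numpy as np

    alpha = 1.4e-7             # thermal diffusivity, m^2/s (roughly water-like)
    L = 0.1                    # slab thickness, m (a smallish organ)
    n = 51                     # grid points across the slab
    dx = L / (n - 1)
    dt = 0.4 * dx**2 / alpha   # within the explicit-scheme stability limit

    T = np.full(n, 37.0)       # tissue starts at body temperature, deg C
    T[0] = T[-1] = -40.0       # surfaces clamped to the coolant temperature

    t = 0.0
    while T[n // 2] > 0.0:     # run until the core reaches 0 C
        T[1:-1] += alpha * dt / dx**2 * (T[2:] - 2 * T[1:-1] + T[:-2])
        t += dt

    print(f"core reaches 0 C after ~{t / 3600:.1f} hours")

The core takes on the order of hours to catch up, and for all of that time there is a steep temperature gradient between surface and centre - which is exactly where the uneven shrinkage and peeling comes from.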

The problem with that approach, aside from the danger of cracking at very low temperatures, is that most of the known cryoprotectant chemicals are also toxic. The percentage of cryoprotectants in the cell cluster must be carefully controlled and limited during the cooling process to avoid cell and tissue damage, and even more carefully controlled during the thawing process, as the cryoprotectant chemicals are removed from the cell cluster and blood is re-perfused through the tissue. Until now, this chemically mediated vitrification process was the most advanced known technique for inducing long-term human suspended animation.

However. As of last year (2010) there is a new process being researched. It was developed by Japanese food scientists for preserving sashimi, but has been experimentally applied to the cryonic preservation of teeth (a difficult process, as teeth usually lose viability in a matter of hours) and is exciting a great deal of interest in the potential for cryopreservation of transplantable organs. The basic idea is that an oscillating magnetic field is used to 'vibrate' the tissue during the cooling process, preventing water molecules from aligning to form ice crystals. Once a certain temperature is reached, the water molecules no longer have sufficient energy to line up, and are unable to form ice crystals. The tissue is effectively vitrified without the use of cryoprotectant chemicals.

To me, this is a very exciting development. Not only does it show real promise for cryonic preservation and revival of entire organisms (essential for any long-term space travel or colonisation mission - who wants chickens, horses, or cattle running around underfoot in the spaceship? But we'd want them at the other end of the trip), but it has a significant impact on my intention to stay alive for a very long time. We can already print some organs, and there is very promising research into printing more complex organs using adult stem cells and cartilage frameworks. If I could simply have a spare heart, liver, and a couple of kidneys printed and then stored indefinitely in case I need them, that would vastly improve my chances of sticking around if something terrible (accident, illness, ...) happens. And the potential to suspend and revive entire organisms is a nice little insurance policy in case transplanting new, fresh copies of my own organs isn't going to cut it.

The plan: build a miniature version of the magnetic freezer, and carry out some experiments.

20110203

complexity of now

We see the future the way we see the present - linear extrapolation. Humans are very good at linear extrapolation. We're not so good at comprehending exponential growth.
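A toy calculation makes the gap concrete. Assume (arbitrarily - the two-year doubling time is a stand-in, not a measurement) that some capability grows exponentially, and extrapolate it the way intuition does, by extending last year's slope in a straight line:

    # Toy comparison: extrapolating exponential growth linearly.
    doubling_years = 2.0

    def actual(t):                       # the exponential reality
        return 2 ** (t / doubling_years)

    # "Linear extrapolation": extend the slope seen over the last year.
    t0 = 10.0
    slope = actual(t0) - actual(t0 - 1)

    for years_ahead in (1, 5, 10):
        predicted = actual(t0) + slope * years_ahead
        real = actual(t0 + years_ahead)
        print(f"{years_ahead:2d} years out: predicted {predicted:7.0f}, "
              f"actual {real:7.0f} ({real / predicted:.1f}x off)")

One year out, the straight line is only about 10% short; ten years out, it's wrong by nearly an order of magnitude.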

We're also not very good at imagining the future being very much like the now. It's always utopias (Star Trek, ...) and dystopias (Transmetropolitan, V For Vendetta, The Handmaid's Tale, ...). What people don't seem to be very good at understanding is that things change massively quickly, but they do so mostly in the background. Star Trek technology looks like magic to us, but to the characters it's commonplace. LCD TVs and smartphones are commonplace to us, but even a generation ago they would have seemed marvellous, and 100 years ago they would have seemed like magic.

We can look back at the last century, and see astounding changes in technology, but we missed most of them as they happened. The big, obvious changes, sure - people noticed the television, the personal computer, the mobile telephone. But who noticed the vast but creeping changes to video displays once we had them? Who noticed the increasing power of space telescopes, and the wonder of seeing galaxies millions of light years away? Things have been getting better by increments, we've been learning more and more by increments, and our global culture has changed by increments to accommodate and surround those changes. But it's very easy to miss that happening.

Compared to 300 years ago, we do live in a utopia. And equally, we live in a dystopia. And compared to life now, life 300 years ago was a utopia - and a dystopia. Everything is relative; it depends what you expect, and what you focus on, and what you value. Real life is more complex than you think.

20110108

paradigm shift

People don't often ask me what I mean by 'the singularity' when I mention it; instead, they get this confused expression and say nothing. After I've clarified that I'm not talking about a black hole or about the theoretical start-point of the universe-as-we-know-it, they usually continue to look confused.

Words may well be the most powerful drug known to humankind, but they don't always make things easy. Try communicating with someone with whom you share no common language, or with someone in a profession which makes heavy use of area-specific jargon (computer programmers, for example), and you'll see exactly what I mean. Words are only useful when their meaning is shared by the people attempting to communicate.

'Singularity' is a great word. It draws the mind to images of stars and galactic vistas, handily coloured for artistic appreciation by NASA's tireless scientists. It implies a beginning, and one of vast significance, equivalent to the birth of the universe itself. It allows us the warm and fuzzy subconscious certainty that we are all in this game together, that the spheres are aligning and some new utopian age will be here any second now. But it doesn't actually mean anything in the context in which we're using it. We might just as well call the singularity "the pyramid". We'd get the same blank looks from those not in the know.

What we should call it, what we mean when we refer to it, is a paradigm shift.

I know the official line: the singularity is the point in time at which strong AI comes into being - when artificial (man-made) intelligences emerge which are capable of self-improvement, and which inevitably engage in self-improvement until they become weakly godlike beings superior to us humans in understanding, intellect, knowledge, and information processing power.

My usual description, and the one which I believe captures the idea of the singularity more accurately, is that it is the point in time at which the rate of technological change - visualised as an exponential curve of change vs time - is so large that unaugmented human beings are incapable of predicting further changes or consequences. That is, when the exponential curve approaches vertical.
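To make 'approaching vertical' concrete (again under the arbitrary assumption of a fixed doubling time), measure how long each successive equal increment of change takes - the interval shrinks toward zero:

    import math

    doubling_years = 2.0
    k = math.log(2) / doubling_years     # exponential growth rate

    # If change(t) = e^(k*t), the time to climb from level c to c+1 is
    # dt = ln((c + 1) / c) / k:
    for c in (1, 10, 100, 1000, 10000):
        dt = math.log((c + 1) / c) / k
        print(f"from level {c:5d} to {c + 1:5d}: {dt * 365.25:7.2f} days")

The first unit of change takes two years; by the ten-thousandth level, an equal-sized step takes a couple of hours. Somewhere along that curve, unaugmented prediction stops keeping up.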

But either one of those definitions, if followed through, boils down to paradigm shift. Not purely within the hard sciences, as in the original sense of the term, but applied to all of human culture and society. A change in basic assumptions, a crisis brought on by new information - factual anomalies which refute the existing 'truth' and require a new truth, a new mode of thought and action to be created and implemented.

The singularity is not unique, though the term may lead one to assume that it is, or should be. Paradigm shifts have occurred within society many times, as a result of technology or social innovation. Like puberty for a mammal, paradigm shift is a time of change and sudden growth for a species. Species such as our own have the dubious pleasure of entering the tumult of puberty and adolescence more than once, of learning to adjust to the changes we are undergoing.

20100619

desktop assistant

LuminAR is a new data manipulation and interaction device combining a desktop robot, table lamp, and personal research assistant. It is designed to dynamically augment its environment with media and information, while seamlessly connecting with electronic devices such as laptops and smartphones.

Now ask yourself - is that a robotic assistant? An alternative computer / data interface? A human peripheral prosthetic? ...All of the above?

(via Inhabitat)

20091218

science fiction

My parents read science fiction, so I was introduced to it at a young age. I remember my father reading Piers Anthony's Blue Adept to me when I was seven - and even though that is as much fantasy as SF, it gives you the idea.

Science fiction inspires and is inspired by real science and technology. Many inventions predicted by SF writers have emerged or are emerging into the real world - everything from lasers to online avatars. Trouble is, a lot of those predictions were made in the late 19th and early to mid 20th centuries. Lately... there doesn't seem to be a lot of it happening.

PC Pro (UK) has an article which basically asks if science fiction as a genre has run out of steam. Although there is still good SF being written, much of it is either space opera or social SF, dealing with ideas such as first contact.

I don't know if it's true; there are other factors at play here. It is increasingly difficult for new authors to get published, due to the lack of small publishing houses which are willing to take a risk on a first novel in what is still viewed as a niche or non-literary genre. There are fewer magazines which will publish SF short stories. But if it is true, if there are fewer innovative ideas coming out of science fiction writing, think about what that means.

Either our culture and technology are entering a period of decadence and decline - or it's just getting harder to make useful predictions that won't be out of date by the time the novel is finished and goes to print a year later.

In other words, things are just changing so fast that writers don't have time to keep up and imagine fourteen steps ahead.

20091205

Memory

We can talk about human enhancements and memory/cognition upgrades as if they're decades away, but it just isn't true. They're happening now. They're called the internet.

Faustus, over at ErosBlog, writes:
it’s the glorious age of the Internet, which means that with a certain amount of head-scratching and Google searching and perhaps a small outlay of cash, you can sharpen up strange old memories. And so I did. The clip indeed exists.
I read that, and I thought - well, yes. The truism that Google knows everything, or more accurately that the interweb knows everything, is actually not far off. I remember something, vaguely... so I look it up online. Example: a music video to a song which was popular for all of a week sometime while I was in high school, and all I can remember is a few images from the video (no artist, no lyrics, not even the melody). Yep. The interweb knows what it's called, what year it came out, who it's by, and can even supply a copy of the clip.

I often joke about the internet + my computer being extra parts of my brain, but you know... it isn't really a joke.