
Musings on the Future of Home Computing

Researchers at the University of Cambridge, UK, recently demonstrated printing transparent and flexible graphene-based thin-film transistors with a modified inkjet printer ("Ink-Jet Printed Graphene Electronics" at arXiv).

So what does this mean? In the future, you could download a chip design from the Internet, modify it as required, and fabricate it in your garage with a kind of inkjet printer. You could essentially build an entire system by printing the sheets and then combining them with suitable cables and connectors. Maybe the result won't beat an Intel Core i7 in speed, but it will be a treasure trove for hobbyists and professionals worldwide - think today's Arduino-hacking innovators, supercharged.

Longer-term effects: production and innovation in computing technology will move one step down, from corporate labs and fabrication plants to homes and hackerspaces. This translates to faster turnaround times: no need to build elaborate marketing campaigns or align release dates with Christmas sales, building a run of just two prototype chips becomes feasible, the whole world's experts are available, and so on. (Having free and open (as in speech and beer) hardware will be a major factor in this development - one could close the hardware off, but development convenience would suffer and speed would slow as a result.) The application areas will also move beyond "cool, I just printed a tiny logic circuit" to "cool, I just printed an ARM core" and beyond. Once this speed of innovation reaches neighbouring areas such as wireless communications technologies, we will truly see some interesting developments.

Will this destroy massive corporate R&D projects? No, I don't think so; the two won't interfere with each other for a long time yet. It takes expensive and complex equipment to research and develop a memristor, for example. But the speed of remixing and improving existing technology will increase. Also, the distribution of technology will move beyond the shackles of "the market is just 10k people, forget about it". In summary: self-fabbed printed circuits will take care of the evolutionary paths, corporate R&D of the big revolutions, and meanwhile the long tail will become flatter and longer.

Principles of Ubiquitous Computing

Here's a presentation I gave at the 15th Summer School of Telecommunications in 2006. The subject is "Principles of Ubiquitous Computing".

In retrospect, a few notes are in order. Back when I was reading the available literature and research, there was a kind of consensus that the peer-to-peer model of communication - device-to-device communication without intermediaries - would play a big role, as it would let device deployments scale without requiring new or existing static network infrastructure. However, the bulk of today's ubiquitous computing devices (sensors, smart phones, electricity consumption meters, etc.) rely on static communications infrastructure to function.

Also, the "Spam/Big Brother Society" is as relevant a danger as it was then. As I see it, the danger has merely evolved and is even more extensive today.

Today, more and more information about private individuals is collected with the justification that "with the information, we can show you more relevant advertisements". The infrastructure of knowing who you are, what you think and whom you know is in place to learn what goods or services we might currently be missing.

At the moment the Spam Society is very benign. However, once this infrastructure and data are in place, it can be hard to remove them, to escape their reach, or to prevent them from transforming into a Big Brother Society. Even if one were to vanish as a target of data collection today, the previously obtained information would still contain a lot of data that could be misused.

For example, what can happen if a political party with a violent agenda takes power, one way or another? If your profile indicates you have been thinking the wrong thoughts, then instead of advertisements you would get night-time visitors taking you for a long car ride that culminates in a neck-shot in the woods. Interestingly enough, there is prior art for this kind of horror scenario: the Nazi government data-mined census records, with IBM's help, to weed out people with Jewish ancestry.

As for the current state of ubiquitous computing devices, the smart phone stands as a lone king. It helps people organize their lives, entertains them, helps them keep connected with others, helps them document their lives with photographs and videos, and so on.

Although not quite as invisible as Weiser envisioned, for those who have one the smart phone is always present, ready to serve - and with modern UIs, it tries not to get in the way too much. I'd say that at the moment the smart phone is the closest thing to Weiser's vision of calm technology. Also, over time the smart phone has only gotten better, and I expect this trend to continue.

Generally, a big downside I see with all current smart phones is the level of trust that must be placed in the maintainers and owners of the smartphone ecosystem not to abuse the data they collect (location data, contact data, calendar data, etc.).

For example, Google backs up your WLAN passwords if you enable the Backup My Data option. It's convenient in case you lose your phone, but do you know who ultimately has access to the data and what they do with it? If you disable the option, the data is said to be removed. Fine; now, how will you know this to be true? You can't; there is no way to check, so you simply have to trust them. There are technical ways to remove the reliance on trust (e.g. encrypt the backup locally with a user-given key and only then upload it), but at the moment such techniques are not used.
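The "encrypt locally, only then upload" idea from the paragraph above can be sketched as follows. This is purely an illustration of the data flow, under my own assumptions - the function names are mine, not Google's, and the keystream here is a deliberately trivial, insecure placeholder. A real implementation would derive the key from the user's passphrase with a proper KDF and use a vetted authenticated cipher; the point is only that encryption happens on the device, so the server never sees plaintext or the key.

```c
#include <stdint.h>
#include <stddef.h>

/* Toy keystream generator: one LCG step per output byte.
 * NOT cryptography - a stand-in so the data flow is visible. */
static uint8_t toy_keystream_byte(uint32_t *state) {
    *state = *state * 1664525u + 1013904223u;
    return (uint8_t)(*state >> 24);
}

/* XOR the buffer with a keystream derived from 'key'.
 * Since XOR is its own inverse, the same call also decrypts. */
static void toy_crypt(uint8_t *buf, size_t len, uint32_t key) {
    uint32_t state = key;
    for (size_t i = 0; i < len; i++)
        buf[i] ^= toy_keystream_byte(&state);
}

/* The client would call toy_crypt() on the backup blob BEFORE
 * uploading; the key stays on the device, so "the data is removed"
 * no longer needs to be taken on trust - the server only ever
 * held ciphertext. */
```

With a construction like this, disabling the backup option becomes a non-event from a trust perspective: whatever the server retains is unreadable without the user-held key.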

That said, I am a happy user of an Android smart phone. Android is open enough, and the phone hardware it runs on is documented enough, to let a community of enthusiasts make their own aftermarket firmware. Therefore, if I ever become unhappy with stock Android, I can always install CyanogenMod.

The Venusian Emperor

Note: the idea of 'ashen light' is IMO a beautiful example of the cognitive biases that hinder objective reasoning.

From the Wikipedia page for Ashen light:
"Ashen light is a subtle glow that is seen from the night side of the planet Venus."

"Before the development of more powerful telescopes, early astronomer Franz von Gruithuisen [March 19, 1774 – June 21, 1852] believed that Ashen light was from the fires from celebration of a new Venusian emperor, and later believed that it was the inhabitants burning vegetation to make room for farmland."

Nice theories, don't you think? They were likely the best speculation of their time, but just consider how human-culture-centric those thoughts really were.

  1. "fires from celebration of a new Venusian emperor" = Venus has an emperor (implying a hierarchical society), and celebrations are conducted in a primitive fashion through the lighting of massive planet-wide fires.
  2. "inhabitants burning vegetation to make room for farmland" = there is a lot of vegetation on Venus, enough that it needs to be burnt on a massive scale in order to conduct agriculture.

Basically, these thoughts mirrored the sociological-technological-philosophical environment surrounding von Gruithuisen and projected it onto an alien setting. The tacit assumption seems to have been that the status quo in which von Gruithuisen was living at the time (hierarchical society, all hail the leader, dependence on agriculture, etc.) was the most natural state of things, and that it was therefore reasonable to think that alien places, even civilizations on other planets, would follow the same model.

Let's extend these thoughts to SETI. What are we trying to do with SETI? We're trying to pick up (radio) signals of an alien civilization.

Notwithstanding arguments about the necessarily narrow time window in which we could pick up anything in the first place, notice how many assumptions we're making: that radio signals are used, that the "water hole" is preferred, that the alien race is potentially willing to attempt contact, that we could detect and distinguish an artificial signal from a natural one, etc.

What if we're going the way of Baron von Gruithuisen here? What if what we're trying to find is simply so alien that we cannot comprehend it from within our sociological/technological/philosophical tradition and background? What if we're trying to find traces of a hive mind of superintelligent translucent slime which, because it communicates by clanging on pipes of ice under liquid methane oceans, thinks others must surely do the same?

My questions are not meant to imply that SETI is a waste of time and that we should stop it - on the contrary!

What I am saying is that we should remember Baron von Gruithuisen and try to think outside the box, by de-assumptionizing (does that word win at Scrabble?) and re-thinking the "model of the space alien", since we really have no good reason to assume anything specific in that area.

Pseudorandom Blast from the Past

A friend was messing around with pseudo-random number generators, so I dug this thing up from the historical archives for him.

It's a PRNG implementation done while I was a university student.

The main idea is simple: run multiple Galois-configuration m-sequence LFSRs in parallel, combining their outputs with XOR, thus making a generator with a far longer period than, say, an LCG could reach on the given machine architecture. This is possible because the component periods are pairwise co-prime: the length in bits of each individual LFSR is a Mersenne exponent, so each period 2^m-1 is prime.

There are several defects: it is not cryptographically secure, and it passes neither Marsaglia's Diehard test battery nor the more modern Dieharder. The least significant bit is always zero, so every output value is even. However, it still might be "good enough" for some kind of embedded use, or as a starting point for something better.

The example implementation has a period of about 2^96, using eight 32-bit values as the internal state plus eight bit masks. The implementation is also quite simple.
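To make the idea concrete, here is a minimal sketch using just two component LFSRs with Mersenne exponents 2 and 3, giving periods 3 and 7 and hence a combined period of 3 * 7 = 21. The tap masks, names and seeds below are my own illustrative choices, not the ones used in prng.c (which runs eight such registers):

```c
#include <stdint.h>

/* One step of a right-shifting Galois LFSR: shift, and if the bit
 * that fell out was 1, XOR in the tap mask. */
static uint32_t galois_step(uint32_t state, uint32_t mask) {
    uint32_t lsb = state & 1u;
    state >>= 1;
    if (lsb)
        state ^= mask;
    return state;
}

/* Maximal-length tap masks for the two toy registers:
 * 2-bit LFSR, period 2^2 - 1 = 3
 * 3-bit LFSR, period 2^3 - 1 = 7 */
#define MASK2 0x3u
#define MASK3 0x6u

typedef struct { uint32_t s2, s3; } prng_state;

/* One combined output bit: XOR of the component output bits.
 * Since the component periods 3 and 7 are co-prime, the combined
 * state only repeats after 3 * 7 = 21 steps. */
static uint32_t prng_next_bit(prng_state *p) {
    uint32_t bit = (p->s2 ^ p->s3) & 1u;
    p->s2 = galois_step(p->s2, MASK2);
    p->s3 = galois_step(p->s3, MASK3);
    return bit;
}
```

The full-size generator is the same construction scaled up: eight registers with Mersenne exponents 2, 3, 5, 7, 13, 17, 19 and 31, whose prime periods multiply out to roughly 2^96.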

Grab it here: lfsr-prng.tar.gz.

The implementation consists of two parts: prng.c contains the actual PRNG, and prngdriver.c exercises the PRNG to output values in some format. As an example, it currently outputs values in a format Dieharder accepts.

The description from prng.c, in my own words (I removed my old mail address):

/* Simple PRNG using linear feedback shift registers (LFSRs)
 * in Galois configuration.
 * A primitive polynomial modulo 2 is used for the LFSR taps.
 * Thus each LFSR outputs an m-sequence. The length (in bits)
 * of each LFSR is a Mersenne exponent, so the length of
 * each m-sequence is a prime number (2^m-1).
 * Since the individual periods are relatively prime, the
 * period of the combined generator is the product
 * of the individual LFSR period lengths:
 * (2^2-1) * (2^3-1) * (2^5-1) * (2^7-1) * (2^13-1) * (2^17-1)
 * * (2^19-1) * (2^31-1) = about 2^96.34
 * This PRNG was intended for use on machines without /dev/random
 * (eg. old DOS boxes found at the parents' place etc.)
 * NOTE: This generator is not good for cryptographic applications! */

Have fun with it!

How Do You Blank Your Screen?

(From a question about screen savers to musings about problems in adopting and providing radically new technology and technological improvements.)

A friend ran a poll asking "which of you use screen savers?". So far there are four replies, and no-one has a fancy animated screensaver; everyone just blanks the screen after a timeout. Does anyone really use animated screen savers nowadays?

I don't see the point of active animated screensavers, because:

  • Screensavers kick in when the computer has been idle for some time. Most likely there is no-one around to see the screensaver.
  • The ye olde miniature particle accelerators, a.k.a. CRT monitors, needed a "screen saver" to prevent burn-in patterns, but this is not so on modern LCD/TFT screens.
  • Active screensavers consume electricity. Electricity costs money. Thus you pay for something no-one sees, for no additional utility.

Note: I can understand something like SETI@home, Folding@home and other BOINC-style projects posing as "screen savers", but those are not really screensavers; they are programs that run when the computer is otherwise idle, and there just happens to be a fancy visualization to show off what is happening. Conceptually, for a normal user, such software is simply easier to understand as a "screensaver".

What interested me about this question was the thought that followed: the world is big, so surely there are still people using animated screen savers. Fair enough; maybe they are around the computer but not using it, and/or think it's pretty, whatever.

Or perhaps they never even considered it. Something was necessary in the past (preventing picture burn-in), things changed (technology developed), and yet some people insist on doing things the way they've always done them.

Is it because the technology is sufficiently "similar" that people don't feel the need to reconsider their existing patterns of use? A screen is a screen, even if it's flatter, brighter and has a crisper image, and since I've always used a screen saver with a screen, I'll just keep on using one. Investing the time and effort to re-think patterns of use is a trade-off I as a user don't want to make.

This kind of thinking exists in many places - for example, car user interfaces: you've got a hole for the key, a steering wheel, pedals and a gear stick. This has remained more or less unchanged for over 60 years (!). Talk about a stagnated industry. Why not have something like a joystick with acceleration/deceleration/direction all in one control? The car would potentially be easier to drive, freeing more cognitive capacity for observing the signs and other traffic, and in that way making driving safer.

Some of this sticking to the past is driven by the risk-averseness of the companies that manufacture the products. They don't want to bring something too disruptive to the market, for fear of customer rejection of the "alien technology", even when that fear is unfounded. Sometimes excessive cost is cited - we won't see joysticks in cars because it costs so-and-so much and requires this-and-that. Well, new technology always costs a lot until competition and further technological improvements bring the price down. The thing is, that development will never happen until someone takes the first step.

Naturally, some part of this issue lies with the customers and users themselves. If people care only about the utilitarian aspects of the technology (the car takes me from A to B, the screen shows me computer stuff), there is little desire or need to learn new things - even though those new things might save money, time and effort, make usage more efficient, and overall make life easier.

The question I want to ask now is: to maximize the benefit of new technology, does the technology have to be sufficiently dissimilar - forcing people to re-think their usage patterns - so that they won't get stuck in old (possibly detrimental) ways of using it? And if this is the case, then:

  1. How to drive adoption (user acceptance) if the technology is too different?
  2. How to ensure manufacturers (and not just startups with "nothing to lose") will be bolder and step out of their comfort zones?