More and more, cyberspace is encroaching on real space: our data storage and processing are moving into the cloud, and the availability of smartphones and data plans at almost every price point means the Internet is present anytime, anywhere.
Yet this pervasive, shared digital world is just a glimpse of things to come: close-to-market devices like Google Glass and the Oculus Rift promise new levels of immersion in the virtual world.
Google’s product touts a constant connection to the Internet as you go about your daily activities, while the Oculus Rift’s virtual reality headgear promises the ultimate gaming experience. Even more futuristic experimental technologies, like cybernetic contact lenses and retinal implants, will combine these capabilities into what will eventually be the ultimate visually immersive experience and Internet connection. These may still be some years away from becoming a reality, but the basic technologies already exist, and our knowledge of neurology and our skills in miniaturization are both advancing rapidly. The question is no longer whether these developments will become a reality, but when.
As groundbreaking as these advances are, they’re being matched by improvements in interface technology: one upcoming device, the Myo gesture-control armband, promises touchless gesture control for any number of compatible devices. It does this by interpreting the combined input of an array of muscle-activity and motion sensors.
Yet while this approach brings us to the futuristic interfaces predicted by movies like Minority Report, some devices on the market go further: they interpret signals from the brain itself, allowing a direct brain interface. The current depth of interaction is admittedly crude, and these devices are mostly used for games, but this will not always be the case. Technologies are already being tested for medical applications that will allow people to control robotic limbs with their minds.
Initially, this technology will be expensive and restricted to people with limited mobility due to paralysis or amputation. Experience suggests, however, that it will not only become cheaper but will expand beyond purely medical purposes to anyone who wants to interface directly with electronics.
These developments will not, of course, be universally welcomed. Helping the blind to see and the paralyzed to move are laudable aims, but the day it becomes commonplace for humans to interface with machines so intimately is also the day we need to redefine what it means to be human.
Constant, in-depth exposure to virtual reality also conjures Matrix-like scenarios of addicts plugging themselves in and zoning out. From a pragmatic angle, we can’t help but think of the many people who already exhibit some level of dissociation from reality.
Where, then, does this leave us? Can we even try to stop the tide of the fast-approaching future, and should we? Or should we embrace our increased interconnectedness, take advantage of the strength, mobility, memory, and computing power that cybernetic technologies promise, and enjoy a world made richer by being seen through techno-colored glasses? Are we comfortable with the thought of ever-increasing dependence on machines? One thing is sure: we need to start asking these questions now and try to come up with answers soon, because the future is coming.
Brandon Peters is an entrepreneur, techie, and writer. He enjoys following cutting-edge technological developments.