In academia, advancements can sometimes be bold, large leaps, depending on proof. More often, though, I think the best improvements come in increments, tested and spread out to everyone. Then someone has an idea that shakes things up again, it becomes affordable and is taken up en masse.
I was looking at some old photographs, I forget where, of an office full of rows upon rows of desks: piles of paper stacked high, and people sat at them with pen and ink, passing the paper around.
Move forward in time and the paper is reduced, typewriters take its place, calculators appear and the number of desks lessens.
Another jump, fewer desks still, and we have computer systems, still some paper (and a lot more post-it notes).
I wonder if anyone considers the next step in this? Perhaps the advancement of video game entertainment can provide some insight, where the need to communicate regularly with friends and team-mates has produced interesting mixtures of text- and voice-based chat systems, with some video.
From what I have observed, though, video chat takes a back seat unless there is a real-world, 'meat-space' object that needs to be shown. If the item is something created in the 'metaverse' of the video game then there is little to no need to actually see the person behind the virtual entity.
I had an idea. What if, instead of a monitor, keyboard and mouse, you used your body and voice, perhaps even your mind, to make natural interactions with your surroundings, which could then be anything from anywhere?
In fact, perhaps there could be entities which are not even really there. I thought a combination of existing hardware could do this: there's the Microsoft Kinect, Vuzix Eyewear and the OCZ Neural Impulse Actuator.
As I’ve said before, in comments about the ‘metaverse’ and technology such as Google’s Project Glass, I think that others are becoming aware of this kind of idea, including Microsoft with their leaked ‘Kinect Glasses’ concept.
This is not so strange, people coming up with similar or the same ideas at the same time. I think it happened with the television and the telephone. It is amusing that it could be happening with another method of communication, or at least interaction.
Perhaps there’s a natural evolution to this?
These alternative, virtual-reality, 3D methods of interacting with the ‘metaverse’, ‘computer world’ or ‘another earth’ are fantastical, possible, realistic.
How does a blind person interact with it?
In the days when I mainly used my Nintendo Entertainment System, then the Commodore Amiga 1200, computing consisted of sitting on the floor by the television. This quickly evolved into having my own, smaller television, where I sat at a desk with the Amiga 1200.
After many years I’ve become prone to the typical pains that blight many users: lower back pain, repetitive strain injury in the wrists and neck, headaches, etc. I think to myself:
“Wouldn’t it be nicer if using a computer felt closer to how we interact with everyday objects? Sure, using a mouse or a game-pad can be a fast way to use things, but is it not possible for me to walk, to sit, to just think, and control a computer?”
I believe it is. Devices such as the OCZ Neural Impulse Actuator have come along, disbelieved by many and picked up by few, but they are a step in a good direction.
Not only this, though, for I also dislike computer monitors. In fact, they are such a hazard that there is written UK law on their correct installation and use in the workplace. So how, then, about using Vuzix Eyewear? Or Google’s Project Glass? It’s not entirely science fiction either, as some people have attempted to put it together.
Still, this is not quite full interaction with a computer system while relying purely on voice. It feels to me that it requires something else, something tangible to interact with. How far away are we from the kind of interaction you get from Heavy Rain’s ‘ARI’?
How would such a system work? Is it possible that a ‘virtual reality’ system such as this could actually interact with the ‘real world’? How would it map it?
With technology such as Microsoft’s Kinect, which can map an area in 3D, perhaps if it could be made at a smaller scale, or even built into standard rooms (or into something worn, so the mapping happens as the world is viewed), the result could be presented and interacted with via some form of eyewear or, as some are calling it, ‘wearable computing’, which, it appears, is not an entirely new concept but is being discussed.
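To give a feel for how a Kinect-style sensor maps a room in 3D: it records a depth value for every pixel, and each pixel can be back-projected into a 3D point using the standard pinhole-camera model. The sketch below is purely illustrative; the focal length and image-centre values are made-up example numbers, not real Kinect calibration data.

```python
# Illustrative sketch: turning one pixel of a depth image into a 3D point
# in camera space, via the pinhole-camera model. The intrinsics (fx, fy,
# cx, cy) are hypothetical example values, not real Kinect calibration.

def depth_pixel_to_point(u, v, depth_mm, fx=580.0, fy=580.0, cx=320.0, cy=240.0):
    """Back-project depth pixel (u, v) into camera space, in metres."""
    z = depth_mm / 1000.0   # depth sensors typically report millimetres
    x = (u - cx) * z / fx   # horizontal offset from image centre, scaled by depth
    y = (v - cy) * z / fy   # vertical offset from image centre, scaled by depth
    return (x, y, z)

# A pixel at the image centre maps straight down the optical axis:
print(depth_pixel_to_point(320, 240, 2000))  # (0.0, 0.0, 2.0)
```

Running this over every pixel of each depth frame yields a point cloud, which is roughly the raw material such a system would present and let you interact with through eyewear.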
Still, we are a little way from the technology depicted in fiction, and while the current solutions do not entirely take into account control from the mind, I firmly believe it is possible to get there, and I would love to be able to work with it.