The impossible becomes likely

Disruptive innovation: an innovation that creates a new (and unexpected) market by applying a different set of values. Chris Fitzsimmons looks at some of the technologies and applications with the potential to radically change the way we use AV technologies in the next five years.

The humble LED started out as a low-brightness device, suitable only for indicator lights. Current models are widely used in indoor lighting applications and video displays. Eventually they are likely to fully replace other lamps in outdoor and high-brightness applications.

When Clayton M. Christensen first coined the phrase ‘disruptive technologies’ in his 1995 paper ‘Disruptive Technologies: Catching the Wave’, he was writing for executives making funding and purchasing decisions in major corporations. In further work on the subject he refined the term to ‘disruptive innovations’, recognising that it was rarely the technology itself that caused an impact on markets, but the way in which it was applied. A disruptive technology typically starts out life as a niche play in a particular application, before going on to dominate an entire sector.

Since the term was coined, a number of such technological advances have been widely recognised as disruptive to particular markets. Another example is the rise of steam shipping. When it first appeared, the limitations of the steam engine confined it to low-profit, inshore routes where sailing vessels were less effective. Steam ships started out operating only in these poor markets, but went on to dominate world trade for decades.

The AV sector is just as sensitive as any other to potential disruption, and this article is an attempt to look at a few existing and developing technology applications and see how they could influence our industry in the medium term. The key ones I have identified are 3D displays, interactive technologies, cloud computing and machine-to-machine communications. We’ll also give some attention to the latest web technologies in the shape of HTML5 and CSS3.

3D Cinema started it all

Or at least that’s the opinion of Texas Instruments’ John Reder, worldwide education sector development manager. Whilst it may have started in cinema, 3D has the potential to become an unstoppable behemoth of a buzzword. Since the arrival of the movie Avatar in 2009, technology vendors have seized upon it as the next way to get consumers to upgrade their home equipment. The display industry needs 3D to drive new sales.

However, outside of specialist applications such as content creation or postproduction of 3D TV, flat panel displays are unlikely to be the place where we feel the most impact in pro AV.

What we’re seeing already is an upsurge of interest in application areas such as teaching, medicine and visualisation, with systems based mainly on projection.

In the latter two, the professional benefits to be gained from 3D imaging already outweigh the costs involved. In industrial design, the ability to prototype entirely virtually is taking massive chunks out of product development costs.

Interestingly, according to Texas Instruments’ Reder, 3D-ready projectors are not vastly more expensive than their 2D counterparts, at least at the low-to-medium brightness end of the scale. So, in the classroom, where TI has recently been making a push, the real cost is the 3D glasses themselves.

But given that these applications are seeing growth, what is the likely impact on the supporting technology?

The key one is that bandwidth demand will rocket. Stereoscopic 3D technologies depend on the presentation of two separate images simultaneously or in quick succession, one for each eye. That means twice as many pixels need to be generated and transported in order to maintain the same resolution in 3D. If 3D HD telepresence ever takes off in any meaningful way, expect it to be one of the most processing and bandwidth intensive applications you’ve ever come across.
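
As a rough illustration of the arithmetic, the sketch below estimates the raw, uncompressed bandwidth of a 1080p signal and the doubling implied by stereoscopic delivery. The frame rate and bit depth are assumptions chosen for the example, not figures from any particular standard.

```typescript
// Back-of-envelope bandwidth estimate for uncompressed 1080p video, and the
// doubling implied by frame-sequential stereoscopic 3D.
// The frame rate and bit depth below are illustrative assumptions.
const width = 1920;
const height = 1080;
const bitsPerPixel = 24;      // 8 bits per channel, RGB
const framesPerSecond = 60;

const bitsPerFrame = width * height * bitsPerPixel;
const monoGbps = (bitsPerFrame * framesPerSecond) / 1e9;   // ~3 Gbit/s
const stereoGbps = monoGbps * 2;                           // two eyes, two images

console.log(`2D 1080p60, uncompressed: ~${monoGbps.toFixed(2)} Gbit/s`);
console.log(`Stereoscopic 3D at the same resolution: ~${stereoGbps.toFixed(2)} Gbit/s`);
```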

At this point it seems unlikely that 3D will completely replace 2D. In order for it to become totally ubiquitous, the industry is going to have to come up with a way to make glasses-free 3D displays, viewable from a decent angular range by more than one person at once. However, given the rate of technological development at the moment, it would be rash to discount the possibility.


Can’t touch this

Touch-sensitive technologies have made an enormous impact on the pro AV marketplace since they first arrived. Touch panel interfaces have long been a staple of the control vendors, and it’s now possible to make almost any display or surface into a touch or multitouch interface using any number of overlay or camera-based systems.

Along with the touch interface has come a whole new language of finger gestures for performing common commands, one that owes much to consumer brands such as Apple and Google for bringing it into our consciousness. Most of us are pretty much at home pinching to zoom, or swiping to move between pages.

However, the next iteration of human-machine interaction is potentially even more game-changing. I believe the arrival of Microsoft’s Kinect product in the autumn of 2010 is going to have a huge impact on our business. Kinect, for those that don’t know, is a gesture-based control system for Microsoft’s Xbox 360 console. The hardware consists of an array of cameras (detecting in the visible and IR ranges) and microphones, plus a small infrared emitter that projects a pattern of IR light onto the scene in front of it. The IR-sensitive camera measures how objects distort that pattern, building a depth map from which the system can pick out individuals by their body shape. The microphones allow it to recognise users by voice, and the colour camera detects light and colour to aid identification via facial recognition.
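
To give a flavour of what sits underneath gesture recognition, here is a minimal sketch of processing a depth frame of the kind such a sensor produces: every pixel closer than a threshold is treated as part of the user, and its centroid becomes a crude tracking point. The frame format, threshold and function names are illustrative assumptions, not Microsoft’s actual API.

```typescript
// Illustrative sketch only: given a depth frame (one distance value per pixel,
// in millimetres), find the centroid of everything closer than a threshold.
// This is the crude basis of "where is the user's hand?" style tracking.

interface Point { x: number; y: number; }

function nearestObjectCentroid(
  depth: Uint16Array,   // depth in mm, row-major, width * height values
  width: number,
  height: number,
  maxDistanceMm = 1200  // ignore anything further than ~1.2 m (assumed value)
): Point | null {
  let sumX = 0, sumY = 0, count = 0;
  for (let y = 0; y < height; y++) {
    for (let x = 0; x < width; x++) {
      const d = depth[y * width + x];
      if (d > 0 && d < maxDistanceMm) {   // 0 usually means "no reading"
        sumX += x;
        sumY += y;
        count++;
      }
    }
  }
  return count > 0 ? { x: sumX / count, y: sumY / count } : null;
}
```

A real skeleton tracker does far more than this, of course, but the raw material it works from is exactly this kind of per-pixel depth map.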

Kinect isn’t the first example of a gesture- or shape-recognition interface we’ve seen. GestureTek has been championing its application in digital signage for several years.

Gesture-based computing also exists in a number of other applications. Oblong Industries already has an operating environment, g-speak, built around it. However, up until now such systems have depended upon additional handheld hardware or gloves to accurately map hand movements.

The arrival of Kinect is going to do for gesture what the iPhone has done for touch: it’s going to take it mainstream. Already Kinect’s drivers have been hacked, and it can hardly be long before authorised software is released allowing these devices, or others like them, to control interfaces and enable users to interact with systems in new ways. There is also talk that the IR camera is currently limited by Microsoft to a relatively low resolution, and that it could be made even more sensitive.

The effect of Kinect will hopefully be to raise the general awareness and acceptance of gesture technology, allowing uptake in the applications where it could have most impact: DOOH media, visitor attractions and education, as well as data visualisation and simulation.

Liz Berry, a partner in hologram specialists Hologramica, has high hopes for Kinect:

“I'm very excited about where this could take us in the world of holograms. We are now at a point where the image quality is so good that we are now looking at behavioural technologies in order to make them appear even more realistic. It's now not about how they look, but what they do.

“At its most basic, Kinect could allow wireless and otherwise deviceless activation of a choice of virtual images. This means a hologram could be triggered by an audience member pointing at something, instead of selecting from a touchscreen menu or pressing a button to start. That’s pretty basic, but we could then allow that person to manipulate the image, pushing it or spinning an object or character around with hand or arm movements.
 
“If we used it in the same way with a presenter on stage, the hologram could be made to respond to natural human movements with graphics or objects moving around a stage in relation to the presenter and animating in response to him.

“What is really interesting, however, is where we could go with virtual people, who could be programmed to react to even quite subtle movements or gestures from someone on stage with them, making the experience even more realistic through their behaviour. At the moment all of these things are possible, but they are fiddly and expensive. Kinect opens a world in which we can use the same communication tools to interact with virtual people as with real ones, which must be the ultimate goal.”

The low cost of a Kinect unit gives access to a sophisticated piece of hardware at a price well below the current entry level.


Cloud cuckoo land

Unless you’ve been hiding under a rack for the past two years, you’ll have heard the phrase cloud computing. However, until relatively recently it was probably something you’d have filed under “IT”, with little direct relevance to the AV market other than making it much easier to work remotely.

That will change, if it hasn’t already. Centrally stored, remotely accessible and modifiable information has almost unlimited applications for AV solutions.

The one which already has the most traction is digital signage. With a cloud-based digital signage network, all the administration and scheduling tasks can be performed from any web browser via an HTML page interface. New content can be uploaded to an online store from any computer, and systems become totally scalable, since adding storage space or processing power requires only an upgrade to the hosting package.
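
As a hypothetical sketch of what that browser-driven administration looks like under the hood, the snippet below uploads a piece of content to a hosted media store and then schedules it for a group of screens. The endpoints, field names and dates are invented for illustration; every vendor’s API will differ.

```typescript
// Hypothetical sketch of cloud-based signage administration from a browser:
// upload a piece of content, then schedule it on a named group of screens.
// The URLs and fields are invented for illustration only.

async function uploadAndSchedule(videoFile: File): Promise<void> {
  // 1. Push the new content to the hosted media store.
  const form = new FormData();
  form.append("media", videoFile);
  const uploadResponse = await fetch("https://signage.example.com/api/media", {
    method: "POST",
    body: form,
  });
  const { mediaId } = await uploadResponse.json();

  // 2. Schedule it for playback on a group of displays.
  await fetch("https://signage.example.com/api/schedules", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      mediaId,
      screenGroup: "foyer-screens",
      start: "2011-03-01T09:00:00Z",
      end: "2011-03-31T18:00:00Z",
    }),
  });
}
```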

Education is another market where AV vendors should be paying attention to the possibilities offered by cloud computing. The ability for a local education authority to share teaching materials between, as well as within, its schools is already desirable. The need for this will only increase as such materials become more complex, and harder to produce, with the rise of technologies such as 3D.

It’s not just the content that can be held in the cloud. More of the heavy lifting and processing work can be performed online, with the outputs streamed via the internet to the required endpoint. This will allow increased use of thin-client-type technologies and reduce the cost of endpoints and interfaces.

Whilst on the subject of cloud computing, it’s worth having a look at the latest round of web technologies which are once again changing the way we create web pages of all types. HTML5 and CSS3 are the two key technologies in question.

HTML5 is the next major revision of the HTML standard. Hyper Text Markup Language, to give it its full name, is the language used by web programmers, and interpreted by browsers, to structure and format the content of a web page. It is used in conjunction with cascading style sheets (CSS) to define the way web pages look and behave.

More importantly for the AV community, it allows various media such as video, audio and images to be embedded in those pages, without the need for compliant browsers to download additional plug-ins such as those required for Flash.

HTML5 also adds support for other useful interface functionality such as drag and drop.
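
A minimal browser-side sketch of both ideas, assuming an HTML5-capable browser and a page element to drop files onto (the element ID is made up for the example):

```typescript
// Minimal browser-side sketch: play a video natively (no plug-in) and accept a
// file dragged and dropped onto the page. The element ID is an assumption.

const player = document.createElement("video");
player.src = "clip.mp4";     // any source the browser can decode natively
player.controls = true;
document.body.appendChild(player);

const dropZone = document.getElementById("drop-zone")!;
dropZone.addEventListener("dragover", (event) => event.preventDefault());
dropZone.addEventListener("drop", (event: DragEvent) => {
  event.preventDefault();
  const file = event.dataTransfer?.files[0];
  if (file) {
    // Play the dropped file straight away, without uploading it anywhere.
    player.src = URL.createObjectURL(file);
    player.play();
  }
});
```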

Dave Snipp, CEO of Stardraw.com, is particularly excited by the possibilities offered by Scalable Vector Graphics (SVG). In combination with CSS3 and a clever bit of kit called jQuery, it will allow much more interesting interfaces and templates to be made available to the integration community online. The range of UI templates, styles and elements you’ll be able to choose from as an integrator moving forward is going to explode.
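
To show why SVG plus script appeals for control interfaces, here is a small sketch that builds a resolution-independent vector “button” and toggles its state from script, leaving the styling and any transition animation to CSS. The structure and class names are invented for the example, and plain DOM calls are used here rather than jQuery for brevity.

```typescript
// Sketch of an SVG-based control element: a scalable vector "button" whose
// state is changed from script, with CSS3 handling colours and transitions.

const svgNS = "http://www.w3.org/2000/svg";

const svg = document.createElementNS(svgNS, "svg");
svg.setAttribute("viewBox", "0 0 100 40");   // scales cleanly to any panel size

const button = document.createElementNS(svgNS, "rect");
button.setAttribute("width", "100");
button.setAttribute("height", "40");
button.setAttribute("rx", "8");              // rounded corners
button.classList.add("source-button");       // fill, hover and transition live in CSS

button.addEventListener("click", () => {
  // Toggle a class; a CSS3 transition can animate the resulting colour change.
  button.classList.toggle("active");
});

svg.appendChild(button);
document.body.appendChild(svg);
```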

What all of this actually means for those of you who aren’t web designers is that interfaces on control devices, including everything from PCs to touch panels to smartphones, are about to become a lot more powerful. It’s also likely that the battery life of those devices will improve, since native media playback is less processor-hungry than plug-in-based playback.

The internet of things

Otherwise known as machine-to-machine communications, or M2M, the internet of things refers to the autonomous communication of hardware devices over networks. Sensors installed in devices report on various parameters, and processing systems make decisions based on that data. The very clever, and slightly frightening, part is that all of this data could be stored in the cloud, allowing seemingly unrelated devices to make decisions based on it.

The technology has already seen widespread use in applications such as control engineering, but in that case the sensors installed in processing or manufacturing plants are generally located on internal and largely secured networks. A temperature rise here results in an appropriate response from a processor and cooling system somewhere else.

The next stage is the use of cellular networks, similar to those used by mobile phones, to connect devices to the internet. With effective M2M, equipment can provide information about use (or misuse) trends or single events. Machines can be networked to each other to develop statistics on operating performance, predictive diagnostics, downtime analysis and a host of related monitoring and control information. Actionable decisions can be made quickly, with clear, cost-saving advantages.
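
A hypothetical sketch of that idea in the AV context: a projector periodically reports a few operating parameters to a monitoring service, and a simple rule turns the raw telemetry into an actionable decision. The endpoint, field names and thresholds are all invented for illustration.

```typescript
// Hypothetical M2M sketch: a projector reports operating parameters to a
// monitoring service, which applies simple rules to decide what needs doing.
// Field names, thresholds and the endpoint are assumptions for the example.

interface ProjectorReport {
  deviceId: string;
  lampHours: number;
  intakeTempC: number;
  filterBlocked: boolean;
}

async function reportStatus(report: ProjectorReport): Promise<void> {
  await fetch("https://monitoring.example.com/api/telemetry", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(report),
  });
}

// Server-side rule: turn raw telemetry into an actionable decision.
function maintenanceAction(report: ProjectorReport): string | null {
  if (report.intakeTempC > 45) return "Check room cooling and ventilation";
  if (report.filterBlocked) return "Clean or replace the air filter";
  if (report.lampHours > 1800) return "Schedule a lamp replacement";
  return null;   // nothing to do
}
```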

All of this information should help AV manufacturers drastically improve the reliability and functionality of their equipment. It could also help AV rental companies keep track of their stock, and record how customers are using it.

Those of you with an interest in science fiction will no doubt have noted that this is how Skynet was born, and that projectors capable of thinking for themselves are possibly only a decade away.

These are exciting times for the application of AV technology in all walks of life. Innovations in other areas, in particular the world of the internet, are set to have a major effect on the way we think about the systems we design. The once impossible will continue to become possible, and the possible is set to become more and more probable. I can’t wait.
