Many years ago, when screens were small and expensive, engineers used tablet surfaces to operate CAD systems. But these were not tablets like an iPad or Android device. Take a look at the picture above. You can find these devices in museums; I saw one last year at the Computer History Museum in Silicon Valley. The display is not dynamic, and menu items have fixed locations: part of the tablet surface is a menu, and the other part is used for cursor control and input. I still remember the very early days of my CAD experience, when these devices were actually connected to CAD software and used for input. They make me smile today, but back in the 1980s this was a fancy way to input information and operate CAD systems.
Fast forward to 2016. My attention was caught by Benedict Evans' article – Imaging, Snapchat and mobile. The situation is absolutely different. Screens are cheap and easy to use. We are replacing physical keyboards and input devices with touch screens, digital cameras, virtual interfaces and Leap Motion sensors.
My favorite part of the article is about the input revolution. Sensors, cameras and touch devices are replacing everything else.
This change in assumptions applies to the sensor itself as much as to the image: rather than thinking of a ‘digital camera’, I’d suggest that one should think about the image sensor as an input method, just like the multi-touch screen. That points not just to new types of content but new interaction models. You started with a touch screen and you can use that for an on-screen keyboard and for interaction models that replicate a mouse model, tapping instead of clicking. But next, you can make the keyboard smarter, or have GIFs instead of letters, and you can swipe and pinch. You go beyond virtualizing the input models of an older set of hardware on the new sensor, and move to new input models. The same is true of the image sensor. We started with a camera that takes photos, and built, say, filters or a simple social network onto that, and that can be powerful. We can even take video too. But what if you use the screen itself as the camera – not a viewfinder, but the camera itself? The input can be anything that the sensors can capture, and can be processed in any way that you can write the software.
The article made me think about the transformation of engineering input methods. In the past we used sketches and drawings, and most engineers still rely on this method of input. But if you think about it, such an archaic method can be transformed into digital scans, photos and touch user interfaces, combining elements of digital design with scanning of the outside world.
What is my conclusion? The next decade will open many new ways for engineers to interact with the outside world. As a result of the input revolution, the design lifecycle is going to change. We will rethink the way we design, the way we capture requirements and the way we validate and experience information from the outside world. Just my thoughts…
Want to learn more about PLM? Check out my new PLM Book website.
Disclaimer: I’m co-founder and CEO of openBoM, developing a cloud-based bill of materials and inventory management tool for manufacturing companies, hardware startups and supply chains. My opinion can be unintentionally biased.