Around 40 years ago, I was an 11-year-old kid, dreaming about the Commodore PET. It was one of the first personal computers launched, along with the Apple II and the Tandy TRS-80. This technological marvel, which booted up from cassette tapes and had 4K of memory (yes, just 4,096 bytes), cost several thousand dollars.
It seems to me there’s been a big flip in innovation speed since those days. It used to be that the technology took FOREVER to catch up with what we wanted to do with it. We constantly had to optimize everything to remain within the available memory and processing power.
For example, when I was a university student in the 1980s, I had a summer job doing AutoCAD for an architecture firm. There was a real skill to doing things in the right order, because of the limitations of the system. If you accidentally zoomed out more than you needed, it might take five long minutes to get back to the part you were working on. And we already knew that we wanted to be able to offer 3D video walkthroughs to customers, but even at 640 pixels wide (full screen!), the necessary power was a distant dream.
Now it seems like we have the opposite problem.
We have incredibly powerful technology that seems to jump in capability overnight. Any basic smartphone has far more processing power than the first "supercomputers" of my youth, such as the Cray-1 (although the Cray was much more comfortable to sit on!).
Access to cloud computing power is now effectively unlimited for all but the most exotic uses. And the power of algorithms is becoming staggering. AlphaGo Zero took just two days to surpass more than 2,000 years of accumulated human experience in the ancient game of Go, and it did so simply by playing against itself millions of times, "evolving" faster than any technology before it. Meanwhile, robots are learning to jump and do backflips, and looking incredibly human while they do it.
There are always more things we'd like to do, but the important difference is that the power available today is clearly far ahead of what we know how to do with it. I'd argue that we haven't even fully taken advantage of technologies introduced five or even ten years ago (mobile, in particular), and the adoption gap is growing.
The gating factor is now clearly not the technology; it’s all the changes we need to make to our organizations and society to make the best use of it. We all know that when projects go wrong, it’s rarely because of the technology itself, but because of the all-too-human frailties of the people and systems trying to implement and use it.
How can we speed up adoption?
There have been incremental improvements in how we manage technology and business: flatter organizational charts, the use of collaboration tools, a greater realization of the damage the wrong KPIs can do, rising awareness of the importance of creativity, and attempts at diversity (which has been shown to foster more creativity). But progress has been patchy (I was horrified to see that the percentage of women in IT has actually gone down since I was a kid). And all the most interesting new uses of technology cut across traditional walls between corporate departments, companies, and industries, which multiplies these organizational struggles.
What can we do to improve and evolve the science of implementing new technology as much as we’ve improved the technology itself?
For more insight on tech trends, see CIO Priorities For 2018: IT Thought Leaders Share Their Predictions.