Hey, everybody. Welcome to Five Questions. Today I have Jon Follett from Invo Boston joining us, as well as Scott Sullivan, who is from Invo Columbus. Welcome, guys.
Today we want to talk about a concept or area some people are calling meta-products, or Internet of Things, or ubiquitous computing, or pervasive computing, and we want to have a discussion around this with Scott and Jon. I think we’ll just kick it off straightaway and get into it. I want to get both of your opinions on: How do you think about this, or how do you define it in your own minds? This doesn’t have to be a decisive definition of the Internet of Things or this area, but how are you guys thinking about it right now?
Scott, why don’t we start with you?
What’s really interesting about this whole concept to me is just the “bytes and atoms” of the whole thing; it’s the combining of bytes and atoms, atoms being physical things that exist in our lives, that aren’t on a screen, and bytes being the data associated with those things and the data that is collected from them.
I feel that the knowledge that I get from these meta-products that I interact with every day, whether I see it or not, is the most interesting part of them. I now have this data shadow that follows me around, and I can reference it and I can see it and I can interact with it, and that’s something that I never really had up until maybe a year ago.
Can you tell me a little bit more about what you mean by data shadow?
I like the metaphor data shadow a lot, because a shadow is something that’s cast by a physical object. It’s data that’s directly associated with these physical objects, or physical processes, that happen in our real life. For example, I wear a Fitbit everywhere, and I know exactly how many steps I’ve taken, and I know approximately how many steps it takes me to walk to work every day, and that kind of information and that data is something that I didn’t have otherwise.
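As a toy illustration of that “data shadow,” here is a minimal Python sketch. The step counts are invented for illustration; a real tracker like the Fitbit records this kind of data and exposes it through its own API.

```python
# Toy "data shadow": daily step totals a wearable might log.
# All numbers here are invented for illustration.
daily_steps = {
    "mon": 9_450, "tue": 8_900, "wed": 10_200,
    "thu": 9_100, "fri": 11_050,
}

# Tagged walks to work (steps per commute), also invented.
commutes = [2_150, 2_080, 2_210, 2_120, 2_190]

# The "shadow" becomes useful once you summarize it.
average_daily = sum(daily_steps.values()) / len(daily_steps)
typical_commute = sum(commutes) / len(commutes)

print(f"average daily steps: {average_daily:.0f}")
print(f"typical walk to work: {typical_commute:.0f} steps")
```

The point is not the arithmetic; it’s that a physical routine (walking to work) now casts a queryable digital record.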
That’s really interesting, Scott. I think, for me, Erik, the Internet of Things and meta-products are defined by the technology that makes them possible. We’ve got these low-power sensors, which have become extremely inexpensive to acquire, and those are what we’re using for the data-gathering in something like the Fitbit. Then we have these actuators, which allow you to turn things on and off; you can switch pretty much anything on and off remotely, from lights to your HVAC system to video cameras. Then you’ve also got RFID tags for tracking things, for instance, perhaps tracking Scott when he’s coming to work. All of this is generating massive volumes of data.
As a software firm, Involution is getting involved in everything from visualizing the data to analyzing it to providing context for the people who are going to be taking action on it. On the back end of that, you’ve got all the data-routing, you’ve got your capture, storage, your management. All of these things have come together in such a way that it’s now possible, as Scott so eloquently put it, for the physical and the digital to come together in these meta-products.
What I hear both of you saying is this: How do we take this data, which is already present but maybe not visible, and make it visible? And then there’s the idea of extending the capabilities of physical products: as sensors, actuators, and other devices get cheaper, how do we invest smarts into physical products so they offer up new capabilities, whether those are interaction capabilities or data-collection capabilities?
I think that’s a good way of looking at it.
Before we get into this a little deeper, I want to try to get this as concrete as possible for our listeners. Are there some examples or things that are already out in the world that people might be aware of to try to get their heads around what we’re talking about today?
Sure, I’ll dive in on that. One of the products that I actually find very interesting is produced by Verizon, their fiberoptic network FiOS, and one of the things they’re offering users who have this huge bandwidth capability to their homes is … we’ll call it home monitoring, right? This can range from turning on different lights in your house to running your security cameras, to perhaps unlocking the door for the teenager who forgot their key; basically, the home automation that can be part of such a system.
If you start looking at that, you can see how it could impact your life in a positive way, around energy usage, around things like system security, and by really making your home’s infrastructure accessible when you’re not there.
That’s a real concrete product that’s related to the Internet of Things, that’s online now. You can get it. People use it, maybe, if they have an older relative they want to check in on but can’t be there physically. There’s all sorts of potential for positive impact in a product like that.
Continuing on with the home automation thing, I think one of the more popular examples is the Nest. The Nest is essentially a thermostat that you can control from your iPhone, from anywhere; it’s internet-enabled, but it also does another thing. It takes all of the data that you produce in terms of when you turn it up and when you turn it down. You say when you’re home, you say when you’re away. It can also actually detect whether you’re home; it has a motion or proximity sensor on it, so it can say, “Oh, I see that you’re home. A physical person is walking in front of me; I can see that.”
I think that’s where the invisible interface comes in, and I think a lot of people have a hard time grasping what that would be. It’s that, after a certain amount of time, my Nest knows my exact patterns and it can make calculations and have my house at a temperature that I like without me doing anything at all. I walk in and it knows that, yes, I’m probably going to be there at that time, so it gets it to the temperature that I would have set it at; and it knows when I’m not there so it can lower the temperature, save on heating costs and things like that, and prepare everything.
It’s not just that it’s being tracked or that I can control it over the internet, necessarily. It’s more that it’s using all of the data that I’ve ever given it to give me something back that I no longer have to worry about. I don’t think about it anymore. It’s not an active part of my life. It’s a very passive system at this point.
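A rough sketch of what that kind of passive, learned behavior might look like in Python. This is not how the Nest actually works internally; the occupancy log and the one-hour preheat rule are invented assumptions purely to make the idea concrete.

```python
from collections import defaultdict

# Toy occupancy log: (weekday, hour_of_day) observations of when
# motion was first detected each evening. Values are invented;
# a learning thermostat would accumulate these over time.
arrivals = [
    ("mon", 18), ("mon", 18), ("mon", 19),
    ("tue", 17), ("tue", 18), ("tue", 18),
]

by_day = defaultdict(list)
for day, hour in arrivals:
    by_day[day].append(hour)

# Predict the usual arrival hour per day, then schedule heating
# one hour earlier so the house is warm when the occupant walks in.
schedule = {
    day: round(sum(hours) / len(hours)) - 1
    for day, hours in by_day.items()
}

print(schedule)
```

The user never configures any of this; the schedule simply falls out of the data they have already cast off, which is exactly the “passive system” quality described above.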
The next question I’m interested to get your opinions on is this: Why do designers, whether they’re interface designers or interaction designers, need to be thinking about this, and what are the opportunities in this off-screen space?
I think, as designers in the 21st Century, we’re really faced with a paradigm shift in what we do every three to four years. I think you can really expect that … we all, or at least the designers from my generation, started right as the Internet was coming to the fore. Mobile wasn’t a blip on anybody’s radar, and then all of a sudden the Internet explodes, implodes, and then explodes again, and then you’ve got your mobile computing as the next shift along with all the tablet computing.
Now we’re looking forward to ubiquitous computing, or the Internet of Things, or meta-products. We’re looking to going beyond the screen, and we’re talking about gestural interactions. We’re talking about interacting with our environment.
I think this is the next great shift for interaction design, and of course, there are plenty of periods in computing history that we could outline that came before the internet, and looking forward, there are going to be many shifts that happen after this. But I think part of being a designer these days means having an appetite for this level of … let’s call it “professional chaos,” right? The need to be always learning, the need to be always looking forward at the technology that’s being made available, and in some respects, being ready for the space between these paradigm shifts to get shorter and shorter.
I just arbitrarily put three to four years as my expectation, but my guess is that these shifts are going to start coming a lot faster than we think. There are all sorts of emerging technologies we could talk about that desperately need design, and that’s not really in the purview of what we’re doing here, but the idea is that you must be ready to change and apply your design skills to whatever new set of technologies is coming down the road.
In terms of specific opportunities, we work with what we have. We don’t design for screens. We don’t design for phones or anything. We design for people, and as these products continually enter people’s lives and become more and more pervasive, there’s more and more of a need for us to craft those things and make them better for people to use.
It’s extremely fun, by the way, getting off of the screen, doing things that basically we just couldn’t do a couple of years ago. It’s really exciting.
I think both of you guys are onto something. We talked to Haig and Dave a couple episodes back about design education, and really the idea of — How do we train design students to think beyond just the two-dimensional screen and think about interaction design as something much larger than the internet, or something larger than even screen design. How do people engage with other people, or how do people engage with information?
I think this is just an extension of that, and designers should be thinking beyond just screen design and really thinking about — How do we engage the world in deeper ways?
Scott, something you said, I think, gets us into the next question, which is — So what if designers are interested in getting involved with this?
I’ve heard from a lot of people that I’ve talked to, designers saying, “Yes, that’s really interesting, but I don’t know where to get started.” What are some of the tools of the trade, or where should someone look to get started working with meta-products or the Internet of Things?
I think the whole getting-started part, as a designer, is the hardest in this specific realm, because it’s extremely intimidating. Up until this point, everything we’ve seen come out of this has been highly engineered, extremely well-crafted, well-thought-out stuff; everything I interact with has had a team of brilliant engineers working on it, so it gets extremely hard to say, “Well, I could do something like that.”
In terms of tools, I think the de facto standard here is the Arduino, and what makes the Arduino so interesting is that it was not made for engineers; it was literally conceived for designers, to do exactly what we’re talking about right now. It’s not just that it’s easy. There’s a path to learning these things that’s already been paved, designed specifically around how designers’ minds tend to work, which is in large part visually, with lots of great feedback from the things that you do.
You start really small, and as soon as you get an Arduino, it’ll take about ten minutes and you’re already looking at code, you’re changing code around and seeing what it does, and that’s how you learn and that’s how you move forward.
On top of that, the community around the Arduino … well, the Arduino and Processing … is extremely helpful. I will ask the most asinine questions, and people will be very nice and they’ll tell you, “It’s just this. Just do this little thing,” and then you learn and you keep going, and you move from there. You don’t need a degree, basically; I don’t need a computer science degree to get something happening with these devices.
Yes, I think that’s something that’s really important to remember, that you don’t need formal training in engineering or programming or software development to really start getting involved and to start making stuff, and like you said, most of the stuff is open source and there’s large communities that are very supportive and really enabling newcomers to come in and learn, and helping people get up to speed and get making stuff really quickly.
I think to speak a little bit about the culture that Scott was referring to, I do think that this runs in parallel to the growth in what we’ll call “maker culture,” the hand-crafted aspects of trades that are really seeing a resurgence, I think, in the United States right now; and running alongside of that is this idea of hacking the technology that we have on hand.
We all have iPhones, and we all know at a certain point we’re going to upgrade those iPhones; but in that cycle, we’ve got these incredible computing platforms that we’re no longer necessarily using on a day-to-day basis, like you’ve got your previous iPhone sitting in a box somewhere, or I’ve got a stack of old laptops that I don’t know what to do with.
So in addition to the Arduino or the Raspberry Pi or the sensors that you might pick up to work with those, there’s also reusing all the technology, all these great pieces of technology that we’re no longer using on a day-to-day basis.
I would think, for getting started, it might well be interesting to unpack your old laptop or your old phone and see the parts that make it up, because honestly, if it’s just going into the rubbish heap anyway, it’s worth seeing if we can leverage some of these pieces of technology that are getting “left behind.” I think that’s another way you can look at the getting-started question.
I think those are really good points from both of you, and they both lead into the final question, which is about looking at this development from hacking and making really crude, functional prototypes that offer up these new capabilities, to moving from there to polished designs in the consumer electronics space, or even pushing some of this technology down into infrastructure. There was a recent article in MAKE Magazine about the Sifteo Cubes and their movement from prototyping, making really crude models so they could communicate what was happening, play around with it, and explore it, to the actual final product. I’m curious, just to get your take as we wrap things up today: What do you see as products in this space that are going to be released in the very near future? We don’t need to speculate about the end game and the big future of all these products, but what are some things that people should be aware of and be looking out for in the next couple of years?
To the point of the Sifteo Cubes, that was actually an extremely interesting article. I was expecting them to say, “Oh, well, you know, these are these brilliant people that came up with this awesome idea. They’re two guys out of MIT. They just wired it up, and then they started manufacturing them.” But that’s really not the case.
The path they actually followed is something that I think everyone in the design community is really familiar with: creating prototypes. They had their first prototype, and I think the idea was just to put an email on one and an email on the other, and you could physically sort them with these little cubes.
Then it grew from there, and they made the prototypes. They were very crude prototypes. They started getting them in front of people and people started interacting with them, and they’re like, “Oh, well, I want to start playing with them.” By the time it got to the third prototype, it was actually a pretty fun and involved product, albeit crudely engineered and programmed, that they were actually playing with. People were actually using it and getting enjoyment from it.
They eventually gained a little buzz, and they got, I think, a $10 million investment up front to start their company; but when they actually built the products for production, not one line of code from the prototypes went into that final product, and not one sketch of that engineering went into the final products.
I mean, we’re not building iPhones from scratch. What we’re doing is building something usable that we can get in front of people, like we’ve been doing, and then from there, that’s when you bring in the teams of engineers, after you get $10 million in funding.
Yes, I think iteration, as you pointed out, is key, and there’s a huge difference between going from the prototype of one to the test bed of 10 to the first beta of 100, and then finally into your mini-production of 1,000 or 10,000. There are industrial and software design techniques that need to grow with each of those iterations as the number of products you’re creating gets larger; but remember, you’re learning something with each of these cycles.
That first prototype cycle is really your ideation phase, but at the same time, you’re getting something into the physical realm, which is so exciting.
I think that’s worth remembering when you’re going from prototype to production, that there are these little leaps between the quantities that you’re producing where you’re learning so much about how to make the product great.
I think that’s really interesting, and I think it brings up a couple of points and trends in this space.
I know we’ve been talking today about how the threshold for engagement, for creating these physical prototypes, is a lot lower. It’s easier for designers, or really anyone, to get in and start making something and have something tangible really quickly. But at the same time, the way people are using these products is changing: they’re no longer just consumer electronics. The things we’re making get abused and worn every day; they’re always with us, persistent and pervasive in their engagement with our bodies and motion, so durability matters.
You see this with the Jawbone UP. They came out with their first product, and they didn’t take into account, as well as they needed to, how the product was going to get used and abused.
At the same time that it’s getting easier to enter this space, we also need to be looking to places like medical devices or military equipment that have hardened specifications. How do we make these things quickly, but also make them durable so they stand the test of time, so that as these sensors get embedded down into the infrastructure of our everyday lives, they keep existing and persisting instead of just dying off in a month, a year, or even two years?
Yes, I think that’s a really good point.
To answer the second part of your question about things people can look for, I think probably the item that I’m most excited to get my hands on and see the SDK for is the Google Glass. I want to get one of those for the studio as soon as we can.
There are a lot of other interesting products coming along, too. We’re doing some work with the Leap Motion sensor at the Boston studio, which essentially allows you to program gestural interactions for whatever web app you want, or to manipulate things on screen that you might have locally.
Those are two things. The Google Glass, obviously, is probably more like six months out, and the Leap Motion, I believe, was released a couple of months ago. Those are two things I’m excited about, in both the immediate future and right now.
Great. Thanks for your thoughts and opinions today, guys. I think it’s been a good discussion. I appreciate you guys joining us.