== '''IoT O'Reilly Solid - Transcript''' ==
'''THIS MATERIAL ©2013-2017 SALLY A. APPLIN AND MICHAEL D. FISCHER. ALL RIGHTS RESERVED.'''
Revision as of 18:34, 24 April 2017
Sally Applin and Michael D. Fischer, ''Thing Theory: Making Sense of IoT Complexity''. O'Reilly Solid, San Francisco, June 23, 2015.
I’m Sally Applin. I’m a Doctoral Candidate at the University of Kent in Canterbury in Anthropology (with Technology) and this talk is about making sense of IoT complexity with something we’re calling Thing theory. I tweet as @AnthroPunk.
This talk is about how and why to develop and apply trusted technology to manage relationships between people and IoT technologies. As we’re building all these technologies we’re also going to have to interface with people and a legacy of systems in the physical world, and we need to figure out how to do that. We also need to understand agency, which I’ll explain, cooperation and sociability, the outcomes of these complex information flows (which we call PolySocial Reality), and the design problem of heterogeneity. Lots of different systems, lots of different options. Thing theory is one approach that we have towards resolving this.
Heterogeneous is just the idea that things are mixed up, that there are dissimilar or diverse ingredients. Heterogeneity is great, we like it, it keeps our species diverse, keeps us robust. But it can also create problems if things are so heterogeneous that they don’t have enough commonality. Then there’s nothing to share or have in common. That can be a problem if it keeps people from interacting.
In the IoT, not everyone is going to have the same hardware or software. We saw this with mobile quite a bit, right? We’re seeing people with devices that are configured differently, different apps, different locations, different networks, and it causes a problem if not everyone has the same device or the same capabilities; we get fragmented.
Sociability is the tradeoffs people make to cooperate. If you don’t hear anything else I say today, this is the most important thing: that some yielding on all parties is required for cooperation. If you have people that are absolutely steadfast in their opinions and they won’t yield in any way, there’s no way to cooperate. There’s no way to share information. So people and systems have to yield to be able to create cooperation to get to the goal of making things.
Designing for the IoT is designing for heterogeneity, sociability, and agency. Agency is our ability to select choices from options. And sometimes we select choices that are outside of agreed options. In this case, people are going through an intersection and they both decided to take agency. They weren’t taking the same agency, and they collided and they stopped. We don’t want that to happen with our IoT technology.
Agency is the capacity to make these non-deterministic choices from the set of options. How we can exercise options really has to do with our skill level. We may see a lot of possibility and a lot of options, but we might not be able to actually exercise them, because we don’t have the skill. Usually we can see more options than we’re actually skilled to choose and enact. But technology might be able to help us do that, and that’s why we like technology for the IoT: it can create more options for us, which gives us more choices. It kind of empowers us.
If we think of agency as a ratio of choices to options, sometimes agency will be predictable. If you have a friend who knows you really well, sometimes you can shortcut deeper explanations in conversation, because they just know you and you’re able to just kind of nod and move to the next thing. That might be based on your culture, your social frames, your relationship. But more complex situations don’t always have that foundation. It’s not as simple as making a simple menu choice or a simple interaction. Some of these choices require preparation and social activity to derive the context where that choice is available.
Cooperation is something that’s really required with the IoT and, as you know, with interoperability, cooperation is really important. When we look at this slide, we’re looking at a road space. That road space and the wind towers? Those are the result of a massive amount of the legacy of human cooperation over many, many years. People came together to build all the industries and all the systems that make that work. That’s engineering, civil engineering, asphalt, mining, design, regulation, all of it. With that network, we’re able to do things. We have different goods and services delivered. We are able to—it’s changed the way we do agriculture. Having roads and having technology that cooperated and created this legacy system as these people worked together enables us to do stuff. We need to do stuff for our survival.
To cooperate, we have to share information. Remember, if you don’t yield, there’s no way to share information. If we don’t share information, there’s no way to cooperate. Cooperation is very social. The act of sharing is a tradeoff, a give-and-take, a trusted environment.
Shared experience is social. Remember I was talking about how if you have a friend and they know you? If you have a shared experience, that’s a shortcut to having a commonality that gives you a base for creating a social experience and creating cooperation.
Agency and trust are required for sociability. The foundation of what makes people social and what makes a social relationship is when people come together, they form a sort of a social contract. We have one here. I agreed to come and talk and you came to listen and maybe ask questions or think about things and maybe you’ll teach me things, too. But we have this engagement. At any time any one of us could take free agency. I could get up and leave. You could get up and leave. We could do other things.
What makes us social is the ability to choose to have the social contract to engage with one another. And the effect of cooperative social relationships depends on how well we trust each other. We have a little trust here. You trust I’ll speak and I trust you’ll be there and that’s how this works.
In exchange, we both get interesting knowledge. I might tell you something that might be useful for you. In questions or after, you might tell me something that’s useful for me. We help each other to have extra knowledge. That’s how social experience and cooperation helps us. We can share knowledge and we’re both better off.
I think people expect the IoT to integrate with their devices. I don’t think they’re expecting that it’s not going to work that way. One of the things that I’m quite concerned about is that when we have automation, and processes and scripts that are very strict and unyielding, people become afraid of them taking over, of losing their agency and their ability to make decisions for themselves. That’s important.
When we see people and mobile devices, we may think that they’re siloed, that they’re actually looking at their own thing that’s not related to the local locale at all. But in fact they’re being very social, just not in the local locale. We care a lot about this because the local locale is where the IoT is, right? We're at SOLID, we’re the Internet of Things, we’re connecting back to the local locale. When we have a community of people that are connected, we don’t know who they’re connected to or where. Are they connected remotely, cooperating with someone really far away? Or are they connected to the person next to them? We don’t know. The local locale becomes really important.
In this slide, when we see people looking at devices, one of the things we cannot assume is that they are doing single tasks, that their devices are doing single tasks, or that the cloud behind their devices is doing a single task. We look at human-human, human-machine, and machine-to-machine communication, and it’s all going on at the same time.
People are using different filters, different social media, different applications to help corral and control some of that. But those systems are actually message generators, too, so they create new messages in a different way.
We call this PolySocial Reality. What PolySocial Reality actually is, is a model of the whole communication space. We’re modeling our social interaction today, we’re modeling analog and digital communication. It’s the idea that everything lives within this umbrella of messages that are multiplexed, multiple, synchronous and asynchronous. It’s happening all the time. But the other thing that’s going on is that there are these dynamic bursts that come up. Different configurations of communication create different dynamic structures within PolySocial Reality that recede, and then new ones come up. Being aware that this is going on is really important. When you think about heterogeneity, do you think about the messages that your devices are going to be communicating?
PolySocial Reality is a communications model of these dynamic relational structures that may or may not result in understood communication. This is really important. If you have a situation where people are using their devices, and we see this with mobile, the communication may or may not connect. It might not be synchronous. It might be a little synchronous. It might be completely asynchronous. What we’re seeing with mobile is that people are taking advantage of this and starting to use their time asynchronously. They’re connecting when it’s convenient for them. Remember I talked about “not yielding”? People are using the mobile device and the capabilities it’s afforded them to communicate when they want, through whatever channel they want, to whom they want. So some common knowledge and common shared experience in the environment is getting left out.
In doing so, there’s an assumption that the recipient is actually going to get those messages and that doesn’t always happen. There’s a case of people that empty their inbox and just say, oh, well, I think that they’ll just write back. If it’s really important, they’ll write back. A lot of messages get missed. And if you miss messages and don’t have enough overlap, if it’s critical, then there’s a lot of trouble that happens in terms of communication that affects us in our physical environment.
When environments are social, because of that agency, there can be that fragmentation but there also could be cooperation and it’s not an either/or, it just depends on situations and context. So it’s stretchy and we need to remember that.
Looking at this slide, we can see that PoSR is this complex interactive environment. This is one tiny slice of mobile use, and we all know what we see with people using devices. They’re walking into things, or missing messages, or getting too many messages and not being sure how to control them. This is just one little bit.
PoSR emerges when people are doing things and making these messages, because they actually want to maintain relationships. They want to connect to other people. They’re trying to solve a problem. Trying to cooperate, access different information, and connect with one another, to maintain their relationships. Because they actually want to have that shared experience and connection. But it’s not necessarily going on in the local locale, and that’s part of the problem: the cognition’s happening in different places.
PoSR is modeling and representing these multiple relative viewpoints, these dynamic relative viewpoints that show up in relation to each other. It’s describing the structure of fragmentation and multiplexing and individuation that can happen, but also this connection and cooperation that’s all happening simultaneously.
It’s going to get more complex. We have AR, which is another layer that’s going to be added to mobile devices and the IoT, so we’ll have augmented reality, with people looking at different messages, and not everyone’s using augmented reality. That’s another heterogeneity piece. VR has a different interaction model. AR’s a little more social: when people add glasses, they can still see things. But once you seal things up, remove people, put them on the network, that’s even more heterogeneous.
I had put “40” in the abstract, but we’ll have 30 to 80 billion IoT devices in five years. And that’s just the devices; that’s not the messages that the devices are creating. That’s something that we should solve.
Our goal for the IoT is, let’s not increase the complexity.
Messages need that coordinated process because if we don’t coordinate the process of messages and receiving, we don’t receive the messages. If we don’t receive the messages, things can happen. Bad things. Like boats driving through bridges because they didn’t realize the bridge was closed or the bridge was the wrong size or they were looking at their cellphone.
Knowing human needs, as I said earlier, helps to coordinate many messages, because you can shortcut through shared experience. We think that PolySocial Reality (PoSR) is helpful with the IoT because, given the complexity, developers can use the idea of PoSR, the understanding that there is this dynamic relational structure that changes the way people use time and space, to help them develop agents to manage that in a better way.
Agents are required to do things like mediate communication. They can help if messages aren’t connecting; they can be aware of that and figure out how to help. Agents can invoke agency in a trusted context: they can seek and gather information and then, if they need to share it, if they need to negotiate with another agent in a trusted environment, they can do that on someone’s behalf.
Thing Theory. Thing Theory is our suggestion of where we might go towards organizing this. If you’re not familiar, Charles Addams drew a cartoon called “The Addams Family,” about a macabre, Gothic family. They had what they called a family friend and retainer that worked in a sort of servant capacity, and it lived in a series of tabletop boxes throughout their environment, or wherever they happened to be. Thing is a disembodied hand that would just show up in context and do things.
As an agent model, we like Thing. And we think Thing—if you think like Thing, it might help manage these multiplexed communications, because remember, the goal is actually to design for the heterogeneity and to figure out how to make agency-based systems that are useful and unobtrusive.
Thing listens and follows directions. It changes location. It anticipates needs. It takes action by offering solutions. It assists with tasks. It communicates information and uses knowledge that it’s built up. And it offers continued communication. Here it is listening and following directions:
Clip of Thing and Gomez in Living Room:
[Thing is scratching Gomez's back from its box in the living room with a hand-shaped back scratcher.]
GOMEZ: A little higher and to the right, Thing. Yes. That’s it. Thank you, Thing.
I have a couple of these to go through. Here it is listening and following directions, but changing location, anticipating needs and taking action by offering some solutions.
Clip of Thing and Morticia in the Greenhouse:
[Morticia is crying in the Greenhouse and Thing comes out of its box in the Greenhouse and offers her a handkerchief.]
MORTICIA: Thank you, Thing! I’m sorry I made a spectacle of myself, Thing, but my world has come down around my ears. There’s another woman. It’s true, there is. I knew you’d be blasé about it. You’ve never been married. Or have you?
[Thing gives a 'thumbs down' gesture]
MORTICIA: Thing, what am I going to do?
[Thing makes a fist and we hear a hitting sound.]
MORTICIA: Oh, no, no violence. I wouldn’t care for it.
[Thing makes a scratching gesture and we hear a scratching sound.]
MORTICIA: No eye scratching.
[Thing makes a yanking gesture and we hear a pulling sound.]
MORTICIA: Nor hair pulling. Thank you, anyway, Thing.
MORTICIA: Thing? Wish me luck.
[Thing crosses its fingers.]
MORTICIA: Thank you, Thing. You’re a true friend.
And Thing also can assist with tasks.
Clip of Thing and Uncle Fester in the Living Room [Uncle Fester is dictating a letter to Thing, who is writing it down from one of its boxes in the living room with a quill and ink on paper.]
UNCLE FESTER: I can truthfully say I have never let breeding, social position or looks go to my head. I am looking forward to meeting someone of the opposite sex with these same qualities. Signed, Modesty.
[Thing hands Uncle Fester the writing and Uncle Fester reads it.]
UNCLE FESTER: Excellent. You have a very delicate handwriting. Thank you, Thing.
And Thing communicates information, uses knowledge and here it is, offering different solutions.
Clip of Gomez, a guest, Pugsley, Thing and Morticia in the Living Room [A guest has fainted on the living room sofa and Gomez is trying to revive her.]
GOMEZ: There she goes again.
[Pugsley comes by with a toad.]
GOMEZ: Pugsley, get that toad out of here. Once you take a toad out here, things like this upset him. She’s obviously not a well person. Quick, someone get a glass of water.
[Thing's box in the living room by the sofa opens and he hands Gomez a glass of water.]
GOMEZ: Thank you, Thing.
[The fainted woman awakes, sees Thing, and faints again.]
GOMEZ: There she goes again.
MORTICIA: [enters] What’s all the commotion, Gomez?
GOMEZ: Fred Waters sent us a fainter.
MORTICIA: Well, maybe you better get her some smelling salts.
[Thing's box opens and he hands Morticia a bottle of smelling salts.]
MORTICIA: Thank you, Thing.
Thing offers continued communication. This next example is actually pretty interesting, because at one point it offers an option that will create more options and we’re interested in that, because that increases agency and increases capabilities.
Clip of Thing and Wednesday in the Entry Hall [Wednesday is looking for Lurch in the front entry hall.]
WEDNESDAY: Lurch, Lurch, where are you?
[Thing comes out of its box in the front entry hall and points to another room.]
WEDNESDAY: The playroom? It isn’t nice to tattle, Thing, but thank you, anyways.
[In the playroom]
WEDNESDAY: Lurch? Where are you, Lurch? It’s me, Wednesday. Do you hear me?
[Thing makes a knocking sound and opens its box in the playroom, pointing out Lurch's hiding place to Wednesday.]
WEDNESDAY: [whispers] Thank you, Thing.
Lurch actually needs to change his preferences with the Thing agent so it won’t rat him out. Or it will after he gets enough time to hide.
Thing as an agent model is interesting, because it’s location-aware, with multiple location access points. It senses needs in multiple locations. It responds to needs with action; it takes action. It works as a data store, so it builds long relationships over time, stores that information, and learns, because it’s a trusted environment. In doing so, it can expand the agency of network members by offering choices they might not consider, or its capability to synthesize things enables them to have those extra capabilities, which I talked about earlier.
A Thing-agent is what we call our agent in our model of Thing Theory, and it transforms this complex jumble of services into a successful technological context, because it manages both the technology and the human relationships. It enables humans to have as much agency as possible in an IoT environment. Remember, we’re concerned about scripts and processes restricting choice and options for humans.
PoSR is a given. It just is. It motivates us in our research to look for gaps. We’re looking at those gaps in how information is distributed. When these dynamic structures emerge, what are we seeing that’s missing, that’s not synchronous, how is behavior affecting that? The system that we advocate for should be flexible enough to accommodate both people and the choices and options they want to make, away from something stricter and more of a decision tree.
We also think that the system should politely and faithfully, in a trusted environment, serve users. But it also needs to have its own agency to do the jobs of managing things that it needs to do and facilitate system functions.
A typical IoT application system is a discrete system. There are sensors and actuators, micro-controllers—it might look something like this. And our Thing-agent manages that. It manages one or more hardware components. And these components register themselves with Thing. If they don’t, Thing has network access and it can look up information about what’s connected to it to understand it. Thing can manage components’ external data and Internet access. Thing is managing the information collection of these devices.
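The registration model just described can be sketched in a few lines of code. This is purely an illustration of the idea, not code from the talk or the paper; all class and method names here are hypothetical.

```python
# Hypothetical sketch of a Thing-agent managing registered components.
# Names (ThingAgent, Component, register) are illustrative inventions.

class Component:
    """A discrete IoT part: a sensor or actuator with named capabilities."""
    def __init__(self, name, capabilities):
        self.name = name
        self.capabilities = set(capabilities)

class ThingAgent:
    """Manages components; its own capabilities are the union of theirs."""
    def __init__(self):
        self.components = {}

    def register(self, component):
        # Components announce themselves to Thing. (The talk notes that
        # unregistered hardware could instead be looked up over the
        # network; that path is not modeled here.)
        self.components[component.name] = component

    @property
    def capabilities(self):
        # Thing's abilities are limited by what its sub-components provide.
        caps = set()
        for c in self.components.values():
            caps |= c.capabilities
        return caps

thing = ThingAgent()
thing.register(Component("thermostat", {"read_temp", "set_temp"}))
thing.register(Component("door_sensor", {"read_open"}))
print(sorted(thing.capabilities))  # ['read_open', 'read_temp', 'set_temp']
```

The design point is that the agent's capability set is derived, not fixed: add or remove a component and what Thing can offer changes with it.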
Thing-agents work with other Thing-agents or other meta agents that are not part of the Thing system. There are going to be plenty of vendors and ideas about how to manage things in a meta way and they’re going to need to negotiate with each other, because that’s a lot of heterogeneity between devices and vendors, as well. We’re trying to encourage the people that are building these systems to consider sociability so that agency can be preserved and information can be shared and cooperation can happen.
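The yielding that keeps coming up in this talk can be pictured as a toy negotiation between two meta agents. This is a sketch under invented assumptions (the 25% yield rate, the temperature numbers, the function names are all hypothetical), not anything specified in the paper.

```python
# Hypothetical sketch: two meta agents reach agreement by each yielding
# partway toward the other's position each round. Unyielding parties
# (max_rounds=0) never converge, echoing the talk's point that some
# yielding on all parties is required for cooperation.

def negotiate(pref_a, pref_b, max_rounds=10, tolerance=0.5):
    """Return an agreed value, or None if the parties never converge."""
    for _ in range(max_rounds):
        if abs(pref_a - pref_b) <= tolerance:
            return (pref_a + pref_b) / 2  # close enough: split the difference
        pref_a += (pref_b - pref_a) * 0.25  # agent A yields a little
        pref_b += (pref_a - pref_b) * 0.25  # agent B yields a little
    return None  # no agreement reached

# e.g. two vendors' HVAC agents with different temperature targets
agreed = negotiate(18.0, 24.0)
print(agreed is not None)  # True: both yielded, so they converged
```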
Using this information from components, Thing can share information without betraying the trust of the relationship with the user, if it needs to for certain functionality. With Thing Theory, we’re looking at the autonomous vehicle instantiation: how are cars on the road social with one another to negotiate movement and so forth?
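One way to picture sharing without betraying trust is a policy filter: the agent releases to each peer only the fields the user has permitted that peer to see. A minimal sketch, with hypothetical data, peers, and field names:

```python
# Hypothetical sketch: trust-scoped sharing. The Thing-agent discloses
# only the fields its per-peer policy allows; everything else stays private.

def share(data, policy, peer):
    """Return only the fields the policy allows this peer to see."""
    allowed = policy.get(peer, set())  # unknown peers get nothing
    return {k: v for k, v in data.items() if k in allowed}

user_data = {"temperature_pref": 21, "location": "home", "calendar": "busy"}
policy = {
    "hvac_agent": {"temperature_pref"},   # may negotiate temperature
    "city_traffic": {"location"},         # may see coarse location
}

print(share(user_data, policy, "hvac_agent"))  # {'temperature_pref': 21}
print(share(user_data, policy, "stranger"))    # {}
```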
A Thing-agent is a manager and a participant in the network simultaneously, and its roles are to manage components and Thing agent relationships and to manage relationships with people. It’s paying attention to the system of components, but it’s also keeping and understanding and building a relationship with the people in the environment over time.
A Thing-agent has some principles. It’s a meta agent. It operates over an entire context. Its capabilities are based on what is connected to it; its ability to extend its capabilities is limited by what’s available at the sub-component level. It must be context-aware. It has to be able to identify different capabilities in different contexts and to select the most appropriate ones to offer as options and choices. Thing works with other meta agents. It helps expand their capabilities, too. Just like when we come together and teach each other things, we both leave enriched; those agents can work together to exchange information and exchange permissions to help each other out, in an interoperability way.
To think like a Thing agent, build up composite pieces and see how they connect together. Thing needs to function without instruction at a low level and discover the relationships within the environment. Thing has to manage all those messages and manage the relationships around those messages.
Be context-aware. What you’re making in the IoT has a context. It has context in an environment, and that context helps with figuring out what options and choices are available. Offer many choices. This is really important because, with agency, we look at this in some of our writing on scripts and processes, which is at posr.org in our publications. We look at the constraints and brittleness of systems where the processes are too refined, too pre-decided. Funneling people through limited choice is easier to program, but it limits what people can choose. This is worrying for us for our population in the aggregate.
Offer choices, and offer choices that give people even more choices, which is opening options. Open options so that people have much more ability to discover and supply knowledge and skill through the assistance of a capability that they didn’t know they could have. Technology can do that really well. The technology can open these options and, because it has processed some things in the background that the human couldn’t do, it enables them to make some other choices.
Remember, people are expecting cooperation and interoperability, and we have to address that, especially when we have mobile AR, VR, wearables, IoT, things connected, civic systems. It could be a really interesting IoT.
Our Thing agent must interface to user agents, that’s people and others. It can interface between multiple IoT environments that may be using and supporting the same location or a different location. If I go somewhere and I bring my Thing-agent with me, it could help negotiate with the system that I move into, to change the temperature or do the things that it might need to do. Thing serves at a minimum to inform users and user agents of the capabilities of the IoT in terms that make sense to the people and other things interacting at a higher level in the interface.
How do we do it? What we suggest, and what we’re trying, is implementing Thing Theory as a simulation. Not simulating it and then building it, but keeping the simulation as part of the programming environment, part of the operating environment. It simulates PoSR and gives the Thing-agent the ability to forecast, communicate, and repair emergent structure issues as they come up. If you have an instance of a deontic logic structure and there’s something that you know is not connecting, maybe there’s a different way to address that through a fast simulation before actually giving an option or a choice. Simulation improves the probability of smoother operation and coordination with humans, and it’s built in.
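As a toy illustration of keeping simulation inside the operating environment, an agent might forecast each candidate action and offer only the options whose simulated outcome passes a constraint. Everything here (the state model, the actions, the comfort constraint) is invented for illustration, not taken from the paper.

```python
# Hypothetical sketch: forecast-before-offer. The agent runs a fast
# simulation of each candidate action and only surfaces options whose
# predicted outcome is acceptable.

def simulate(state, action):
    """Forecast the resulting state; a stand-in for a real PoSR model."""
    new_state = dict(state)
    new_state.update(action["effects"])
    return new_state

def viable_options(state, candidate_actions, constraint):
    """Offer only options whose simulated outcome satisfies the constraint."""
    return [a for a in candidate_actions
            if constraint(simulate(state, a))]

state = {"temp": 26, "occupied": True}
actions = [
    {"name": "cool_to_21", "effects": {"temp": 21}},
    {"name": "heat_to_30", "effects": {"temp": 30}},
]
ok = viable_options(state, actions, lambda s: 18 <= s["temp"] <= 24)
print([a["name"] for a in ok])  # ['cool_to_21']
```

The point is that filtering happens before the option is ever offered, so the human still chooses, but from options the agent has already forecast to work.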
It also gives true agency to components or things that need it because it doesn’t always have to be a master-slave relationship. It could be a relationship where components do what’s required and provide information as needed.
This is the part of the talk where I’m repeating things that Mike and I talked about and wrote from our paper and he is much more gifted in the logic but I am giving you this information because this is sort of the key to how we are approaching this problem.
Simulation requires an approach to modeling that can deal with potentiality as well as action. Deontic logic has been our choice of logic for what we’re doing. It includes modal operators and has been demonstrated to be a useful basis for constructing models for sensitive, real-time, time- and location-aware interactions between agents of different types. That’s exactly what the IoT needs, with heterogeneity and multiple messages and integrating people’s phones. The simulation also makes it easier to introduce new agents, new conditions, and new outcomes into the model.
The two papers that we have used for this deontic logic are Castro and Maibaum (2007) and Dong and Li (2013). Castro and Maibaum present a logic that’s suitable for representing interrelations between agents of different types, including agents that have agency, and it extends a treatment of representing and reasoning with agency. Dong and Li look at an axiomatic, algebraic formulation of that logic that’s likely to be useful. An advantage of modeling with this logic is that it’s easy to introduce new agents, new conditions, and new outcomes, and those are all things that we’ll be subjected to with the IoT.
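For readers unfamiliar with deontic logic, the standard possible-worlds reading of its modal operators can be sketched in a few lines: a proposition is obligatory if it holds in every ideal (permissible) world and permitted if it holds in at least one. This is the textbook semantics, not the Castro and Maibaum or Dong and Li formulation itself, and the example worlds are invented.

```python
# Hypothetical sketch of standard deontic logic semantics over a finite
# set of "ideal worlds" (states the norms deem permissible).

ideal_worlds = [
    {"yields": True, "shares_info": True},
    {"yields": True, "shares_info": False},
]

def obligatory(prop):
    """O(p): p holds in every ideal world."""
    return all(prop(w) for w in ideal_worlds)

def permitted(prop):
    """P(p): p holds in at least one ideal world."""
    return any(prop(w) for w in ideal_worlds)

def forbidden(prop):
    """F(p): p holds in no ideal world."""
    return not permitted(prop)

print(obligatory(lambda w: w["yields"]))      # True: every ideal world yields
print(permitted(lambda w: w["shares_info"]))  # True: some ideal world shares
print(forbidden(lambda w: not w["yields"]))   # True: never permissible
```

Note the standard duality: something is permitted exactly when its negation is not obligatory, which is what lets a model express options (permissions) alongside requirements (obligations).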
Again, heterogeneity, sociability, and agency. We have diverse things: diverse hardware, diverse software, diverse apps, diverse people, diverse cultures, diverse time, diverse space, diverse management of time. Designing for the IoT has to take all of this into account, and also figure out not only how we as people are going to be social, but how we’re going to design sociability for these devices and their outcomes. Again, it’s a tradeoff. People have to yield and systems have to yield to be able to have a cooperative environment.
In programming and some of the systems that we’ve seen, it’s great: people design a program, it’s out there, and we have to use it. Maybe it’s not how we want to use it, and maybe it doesn’t work for us exactly. But that’s what’s on offer, that’s what we use, and we’ve adapted to it. But we’re not really that thrilled with it over time, because it limits some of the things we have to do. So we figure out how to work around it, or how not to do it; in big companies we see that people do workarounds. They do what we call covert agency. The people designing the processes and the programs think they’re working, but the workers are kind of doing other stuff to make that process work.
We want to be sure that there’s yielding, so processes aren’t too brittle and we don’t have a false process running that’s not really actually working and then people struggling to make the process work rather than having a cooperative environment where we both learn together and share.
Also, we don’t want to do this. Thing really must be cooperative. It might have—we want it to have agency, but we don’t want it to have that kind of agency. We want it to be friendly to users and we want a system to work.
Back in April, a year ago, Radar asked: will the IoT be won by startups or established companies with lots of resources? My reply at the time, which I still believe, is that the IoT is going to be won by whoever successfully solves this heterogeneity problem, because devices do need to be social. We’ve hit a point, both in our culture and in our technology, where we have to yield and play together or we’re not going to accomplish the things we need to get us these new capabilities that we actually want to have.
The paper that this talk was from is called Thing Theory: Connecting Humans to Location-Aware Smart Environments. The logic is explained more, there’s much more there for you if you’re interested. That’s at posr.org/wiki/publications and it’s publication #6.
Thank you for your time, I appreciate it. I know I’m right between you and a party, so, thank you very much.