Tom Chi  

Big Mother is Watching

November 19th, 2004 by Tom Chi :: see related comic

We’ve all heard of Big Brother — that omnipresent technological eye which tracks our every move with nefarious intent. What hasn’t been heard of or talked about much is Big Mother. I use the term to describe all forms of ‘benevolent’ surveillance. Some examples include tracking chips for your children, the system which allows 911 services to locate you by cell phone, or the video camera at your apartment door.

The line, of course, is not a crisp one. The apartment camera catches mostly innocent people going in and out, and the tracked child might have a completely different view on whether the chip is “benevolent”. Indeed, we are ‘watched’ daily by systems designed with good intentions, but their handiwork can be unsettling. As we move toward having more of our lives in digital form, we enter a phase where Big Mother can have a more significant impact on our lives.

Consider email. Many of us use web-based mail clients or have our mail data on a server. That email contains a lot of information about us and presents interesting opportunities for Big Mother. For example, what if our mail was spidered to search for warning signs of dangerous drug addiction or suicide risk? It could then send a special email describing where to get help, or notify friends and family. Would we want this? How many lives would a system like this need to save before it was deemed valuable enough? These are hard questions.
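As a thought experiment, the mail-spidering idea could be sketched as a naive keyword scan. Everything below — the phrase list, the function names — is invented for illustration; a real screening system would be vastly more sophisticated, and the toy version makes the false-positive problem obvious.

```python
# Hypothetical "Big Mother" mail scanner: flag messages containing
# warning-sign phrases. The phrase list is invented for illustration.
WARNING_PHRASES = ["can't go on", "no way out", "end it all"]

def flag_message(text):
    """Return True if the message contains any warning phrase."""
    lowered = text.lower()
    return any(phrase in lowered for phrase in WARNING_PHRASES)

def scan_mailbox(messages):
    """Return the indices of messages that would trigger a notification."""
    return [i for i, msg in enumerate(messages) if flag_message(msg)]
```

Even this sketch shows how crude the approach is: a message like “there’s no way out of this traffic jam” would trigger a notification to your family just as readily as a genuine cry for help.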

There is a very fine line here. If the technology seeks only to aid you and not incriminate you, it can be justifiably labelled benevolent. But a small expansion of its capabilities toward, say, flagging likely perpetrators of domestic violence or drug trafficking, and you have a Big Brother system.

Beyond email (and data on servers, generally) we have the rise of location-aware devices. Whether they are based on GPS or cellular triangulation, they present the possibility of a world where the physical location of each individual is continuously trackable. While there are situations in which this could be valuable and beneficial (Big Mother scenarios), it can also rapidly decay into bad modes of operation.

The question to pose, as designers of technology, is whether there are ways to isolate the good from the bad when we design products. Could we define benevolent surveillance and only allow for those modes of operation? Could we simultaneously outlaw bad surveillance and really enforce it? Or should we just allow the technology to evolve and adapt our own notions of public/private infospace as needed?

33 Responses to “Big Mother is Watching”
Dom wrote:

But isn’t it the case that the only people who would worry about a big brother/mother system are the ‘domestic abuser’ or ‘drug trafficker’? People with a clear conscience shouldn’t have any problems with it. Or am I missing something? I would prefer to have a chip in me than be kidnapped and not be found.

David Heller wrote:

Uh, no? Imagine if Ashcroft had this chip technology. Privacy is (so far) a fundamental yet eroding right in the US anyway. Like freedom of speech, it is a very important piece of freedom. Not only do we have a right to congregate and assemble, but we have a right to do so without repercussion. It could be said that the problem isn’t the surveillance but the surveillor, but a similar argument is made about firearms and I don’t buy it either. Also, we have already proven that the technology gets the blame … take file-trading, eh?

I think the example in question in the comic is very compelling. Surveillance is but a glimpse, usually out of context, and thus can be very misleading and inflammatory.

No! I don’t want to take away my right to revolution thank you very much. ;)

Kaarthik wrote:

Think about a chronic cardiac patient, folks, who leads a fairly normal life interrupted by brief incidents of cardiac arrest which, if not treated in time, could prove fatal. My opinion is that these technologies are compelling if used with the consent of the ‘user’ (for lack of a better word).

Bob Salmon wrote:

We had some problems at church recently. We like to think anyone can walk in and talk to someone, pray, etc. (Because of the risk of theft, this is restricted to the times when services are on.)

Someone, possibly homeless (I wasn’t there at the time, so this is second hand), came in and ended up getting quite aggressive toward a church regular. Better locks aren’t really the answer, as we don’t want to keep people out (at least, not while we’re inside).

It was suggested that we put CCTV cameras recording to hard disk around the church doors. This, apart from being repugnant in my view, is actually illegal in the UK, as entering a church suggests your religious inclinations and so is covered under the Data Protection Act.

What might be the answer in this case is better training for people in the church on how to avoid tricky situations like this one in the first place (steering the conversation differently, body language, etc.).

As for chips in people - all sorts of horrors here. Your identity could be detected remotely (assuming it’s RFID). And if the chip is detected, what’s to stop a kidnapper from trying to remove it - which is likely to be painful and harmful to your health?

Kevin Cheng wrote:

First, to nitpick, it’s spelt “Surveillance”, not “Survillience”. Given that’s the topic at hand, I felt it wise to correct that. =)

“But isn’t it the case that the only people who would worry about a big brother/mother system are the ‘domestic abuser’ or ‘drug trafficker’? People with a clear conscience shouldn’t have any problems with it. Or am I missing something? I would prefer to have a chip in me than be kidnapped and not be found.”

I have a clear conscience and never have any intention of terrorizing the USA, save for protesting. Does that mean I don’t mind them reading all my e-mails and listening to all my conversations for “national security”? I think not.

Aside from pure privacy rights - something people seem to care less and less about - there are also technological and interpretational issues.

1. Who Watches the Watchmen? When your crime or potential to commit a crime is determined by those using the surveillance technology, who holds them accountable?

2. False Positives. If our spam filter accidentally marks a legitimate e-mail as spam, it’s frustrating and potentially costs money, but is rarely worse than that. If a “benevolent” surveillance system produces a false positive, you get something like the comic strip, which, in the end, is mostly harmless. If a terrorist/crime tracking system gives a false positive, a person’s life is probably ruined.
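The false-positive concern can be made concrete with a rough base-rate calculation (all the numbers below are invented for illustration): even a highly accurate detector, applied to a rare behaviour, flags mostly innocent people.

```python
def prob_guilty_given_flag(prevalence, sensitivity, false_positive_rate):
    """Bayes' rule: P(actually guilty | flagged by the system)."""
    flagged_guilty = sensitivity * prevalence
    flagged_innocent = false_positive_rate * (1 - prevalence)
    return flagged_guilty / (flagged_guilty + flagged_innocent)

# Suppose 1 in 100,000 people is a trafficker, the system catches 99%
# of them, and wrongly flags only 1% of innocent people.
p = prob_guilty_given_flag(1e-5, 0.99, 0.01)
```

Under these assumed numbers, fewer than 1 in 1,000 flagged people is actually guilty — the other 999+ are the false positives whose lives the system disrupts.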

Reed wrote:

This is really hard stuff - and really important! Every engineering and computer science course at university should have a required ethics component.

What it comes down to is use. You can’t just decide to avoid inventing cameras or RFID or embedded GPS devices because of their literal nature, you have to consider use. Will this device be used for evil? For almost any device, the answer is “some people will”.

(As an aside, there is actually IMO another aspect to technological ethics, and that is that usability is itself ethical, and the interface and usability of a technology can influence its use.)

Tom Chi wrote:

Right. Drawing the line between benevolent and not is already difficult. Imagine the drama that could result from a false positive informing your friends and family that you have a drug addiction when you don’t. Still, it is very easy to see instances where such technology would be extremely helpful. How would you explain to the parents of a child who died that they weren’t informed of the problem due to “privacy” issues?

The underlying ethical question is whether this entire class of activity (unopted surveillance) should be allowed. As with many technologies, we have already made decisions with our implementation to date without having really thought them through. While this ethical carelessness could be faulted as the problem, in reality it is not possible to know all the ways a technology could be used for good or bad. Engineers should still try their best to consider this, but which engineer could have imagined that pagers would be perfect for running drugs, that a plane could be used as a bomb, or that cars would lead to so many fatalities?

Perhaps Big Mother systems by definition must be opt-in ones. This removes many of the ethical problems, but also reduces the effectiveness since many in trouble are in denial about the depth of trouble they have reached. Regardless, I think the line of Big Mother vs. Big Brother is an important one to draw when considering the future of our safety, privacy and freedom.

Another Bob wrote:

It’s interesting that the first two posts almost define the poles of the argument. As technology continues to push the envelope we must be aware of the ethical issues. But this is not new to the 21st century: think Nobel & dynamite, atomic power, bio-medical issues (beginning of life, end of life, cloning, orphan drugs, …)

Dave wrote:

I wanted to point people to an article on Boxes and Arrows right now by Adam Greenfield that I think is totally relevant to this discussion.

X wrote:

Why is it always the refrain that if one has a clear conscience, why not submit? I realize there is an international audience for this website, but I, for one, live in the land of the free (or so the brochure says). I, for one, do not hold it true that a free man can submit. Submission is, by its very nature, slavery.

Loaded words, and a charged argument, I’m sure, but let me find some friends of mine who may agree. Depending on which Google results you believe, either Thomas Jefferson or Benjamin Franklin said, “Those who would trade freedom for security deserve neither.” Reagan borrowed the sentiment and stated there was a totalitarian society at the wrong end of that stick. Thomas Paine gives us heaps to play with, but being the lazy poster I am, I’ll go with this one: “Society in every state is a blessing, but Government, even in its best state, is but a necessary evil; in its worst state an intolerable one”.

You cannot “isolate the good from the bad.” These are mere tools, and the constructive design of the hammer does not preclude its use as a weapon. While “intelligent” devices break the metaphor, you have only to look as far as your nearest “powergamer” and the latest massively multiplayer patch notes - extremely controlled environments with extremely controlled situations, where unexpected events supposedly CAN be engineered out - to watch that engineering fail.

If there is more information than abstract, aggregate, anonymous profiling then you’ve already forced me to submit and be beholden unto whatever you want - you just have to be clever enough to find a suitable vector.

I, for one, welcome our new datamining overlords.

Godblessyou wrote:

The cartoon was a little misleading because it was an actual mother taking care of her kid (and not doing a very good job of it) rather than Big Mother. It’s a completely different situation for adults to be compelled to undergo this kind of treatment.

David wrote:

Actually, the cartoon isn’t misleading — the chip was put in when the kid was, clearly, a youngish child — but is being “used” well beyond that point, as the kid is, pretty clearly, at least a teenager now. When does the helpful technology become abusive? Become an invasion of privacy?

Dom wrote:

Sheesh, you people are under delusions of grandeur. To think that anything that goes on in the majority of our boring lives will ever be used for anything other than our own safety is what we get for watching too much Hollywood. When was the last time ‘helpful technology’ abused anyone you know?

Tom Chi wrote:

Dom, I think that’s mixing up concepts. “Grandeur” would imply that what happens to us is important to everyone — which is not what is being discussed. The discussion is mostly about things which are *very* important to our mundane selves (and our immediate friends and family), but not to everyone. If someone I know was contemplating suicide, it is important to me — but I don’t have the delusion it is important to everyone.

That being said, your second point about helpful technology goes more to the heart of the discussion. If you were the child in the comic, you might not think such chips are helpful. Maybe you are a good kid but your parents are overprotective. Maybe you are just a regular kid trying to figure out the whole ‘independence’ thing and not wanting someone looking over your shoulder at every instant.

This is not about delusions of grandeur. This is about the way that surveillance (even done in a ‘benevolent’ fashion) will alter our world.

Brian Curtis wrote:

Then I expect those of you with “nothing to hide” to consent to wearing a video camera strapped to your head all day, every day. 24x7. With all its records available on demand to any governmental official, as part of an official investigation, a whim, or pure voyeurism.

Dom, privacy really IS important. Sure, my life is boring; that doesn’t mean I have no right to keep it to myself. It also doesn’t give anyone else the right to track and shadow my every move. If a heart patient wants a tracking/monitoring device, fine. But not everyone wants or needs that.

The bigger concern might not be governmental monitoring so much as corporate use of our personal lives. Do you always accept Internet cookies? Do you register for every site you visit? Why not? Think about that… and then tell us again why privacy doesn’t matter.

Dan wrote:

“When was the last time ‘helpful technology’ has abused anyone you know?”
How about turning that one around? What’s the last ‘helpful technology’ that hasn’t been abused?

Nuclear power - nuclear bombs
Electric light - electric tasers
Campfire - arson
Free speech - criminal networks
etc, etc.

Which is the whole problem, really: pretty much every good invention also has a darker side where it can be abused, and as an inventor there really isn’t much you can do to avoid that either. Why would someone who’s trying to *improve* the world be able to think in the same way that dictators/criminals/etc. do?

As for “People with a clear conscience shouldn’t have any problems with it”: would you really be comfortable knowing that some unknown person might be watching through that camera over the toilet seat? Would you feel comfortable being tracked when you’re going to a gay bar if you’re still “in the closet”? And that’s not even considering the fact that if data is being transmitted, it could also be hacked and used by third parties.

The question then is: how far can, or should, we go? Take cameras in public areas. They would stop most types of crime, but they would also mean that the kids feel watched. Is it really a good idea to have 300+ kids feeling watched and untrusted every day, while a drug dealer can just sell drugs outside the school, just beyond the cameras’ view? Then again, is that really a good argument for not having cameras, when a kid gets kidnapped from school grounds and murdered or raped?

I’m wondering, though: if you had the option of living in either a city with automated surveillance covering the entire city, or a city with no surveillance, would that satisfy the sense of personal freedom enough? Or would that just create elitist areas, where if you want cops you actually have to live where you’re always being watched?

Robert A. Heinlein has a book about a similar idea; I forget its name. It’s about a society where any display of violence would get you into forced counselling, or you could choose to leave civilized areas and live in total anarchy. The difference there is that they don’t really have surveillance; they “simply” ensure that all humans in society are stable and wouldn’t *want* to commit crimes. Pre-emptive law enforcement, sort of.

Dave wrote:

“When was the last time …”
Well, there is enough documentation of how the gov’t tried to track and then dismantle or otherwise make the free speech and anti-war movements fail. Should we talk about Watergate? I do not think that Hollywood is actually based on make-believe, to be honest. There are enough cases of tracking “unwanted” yet very legal people. Take John Lennon and the FBI. There is far too much traceability already on this planet. I should be able to have the choice of whether or not I want to be tagged at this level.

It’s interesting that you bring Hollywood into it. I use sci-fi as a big determinant of whether or not a specific technology should be pursued. My test is whether that technology is ever used in a successful light (AI) or is always used in a bad light (cloning). Find me a positive use of tracking and surveillance in sci-fi and maybe I’d begin to consider it. When the plausibility of our imaginations can’t fathom something, that is a big warning sign in my mind.

Now, there is the “Star Trek” version of all this, but it is still optional. Take off the badge and you are “free” to not be tracked, except by extremely “expensive” devices — or so they try to make you think. Still, it doesn’t seem abused too often.

Jens wrote:

Constant surveillance can make someone literally ill. Or so I remember from an article I read years ago. They took some people (kids, I believe) and let them live in a controlled environment. They could read, eat, watch television, play, whatever they wanted. But they were always monitored by camera and microphone, and they knew it.
After some time, they began to show behaviour that wasn’t normal and even developed some health issues. After the surveillance was cancelled (and they knew that too), their state returned to normal.

And, to quote (translated; I don’t have an English Bible) someone with more than two thousand years of authority on the subject of a clear conscience: “He who is without failure shall throw the first stone.”

Jens

Bob Salmon wrote:

Here’s something to ponder to do with Big Mother / nanny state / abdication of responsibility. What if, instead of helping foil kidnap etc, the RFID chip under your skin and associated system stopped you from eating too much of the wrong food? It would talk to your computerised fridge, the computerised check-outs at food shops etc.

I think it’s only a matter of time before this happens. (Note: prior art here!) If it does, I may disappear off to the hills and run my own smallholding.

Dom wrote:

Bob, see, even Tom and Barbera were being watched.

Lada wrote:

It’s not just the privacy issue that bothers me. It’s our mentality. What is happening to people who want to have sterile lives, where nothing (nothing!) goes wrong with them: no bones get broken, no files lost, no damage done, no tears shed. A cushion here, three cushions there. Suffocating. Fault-proof systems are a fairy tale. But I am afraid that fault-proof people are becoming a reality. Well, you know, I don’t want to live in a fault-proof world. And if someone wants to commit suicide, let it be their choice. Freedom doesn’t come for free, but it’s not a commodity.

Dom wrote:

I suppose you don’t wear a seatbelt in the car then.

Dave wrote:

Lada, wasn’t that point made quite well in “Finding Nemo”?

Marlin: I promised that I wouldn’t let anything happen to him.

Dori: Well that’s a funny promise …
I forget the rest of the quote but it went something like, “well, that’s pretty boring.”

Bob Salmon wrote:

In case you missed it via Slashdot: your car could spy on you now.

Tom Chi wrote:

I think Lada’s point is that a life which is completely controlled and free of serendipity is both boring and bad for you. The problem is that when we try to stave off negative surprises, we also stave off positive surprises.

All this reminds me of Peter Merholz’s “design for serendipity”, where he pointed to a haphazardly laid out used book store as a special and useful kind of information architecture. In such an architecture, you can’t find what you are looking for, but you really discover a lot of interesting stuff. A book about fluid dynamics might be leaning up against a book of 18th century illustrations, which might both be stacked over a photobook of arctic cloud patterns.

Going back to the thread… in a way, when we talk about this type of security we already make many assumptions about our world. We assume that there are scary enemies to guard against and assign a level of risk and a degree of fear to that threat. Now, the two things to consider are:

1) Is our fear justified given the real risk of adverse scenario X, vs. all other types of more probable adverse scenarios? We need to ask this because the human mind is biologically set up to respond strongly to fear. It overcomes our rational abilities, so we need to double-check before we end up building a world based on our fears.

2) Are the threats immutable? Will the world always be populated with abductors, dealers, crackpots, terrorists, etc? All I can say to this is that in different societies the rates of bad deeds by these bad people vary considerably. I think there will always be some crazy people, but the way a society is organized and the values they promote can allow them to have more or less influence.

Lada wrote:

I do wear a seatbelt in my car, but I want it to be my business, not the police’s. I am *not* against Big Mother (I am pro- any choice). I am against Big Mommy not letting me choose to be an orphan :-) My concern with benevolent surveillance is our dependence on it. Humans (en masse) are lazy. Come on, we are. We don’t like thinking for ourselves if we are given an opportunity not to think. Benevolent anything leads to dependency. Once you are dependent, you are less inclined to think for yourself. Once you are immunised against a common cold, you don’t have to care about exercising and eating vitamins to prevent it. Your body depends not on its internal strength, but on the external help of the vaccine. The same goes for benevolent surveillance. Once you are dependent on helpful advice here, there and everywhere, your ‘psychological immune system’, your internal strength, starts deteriorating. You are trying to be stress-free instead of being stress-resistant. And this is the key difference to me.

Again, I am pro-choice. As long as I can choose my way to be stress-resistant, you can choose yours to be stress-free. But we both should have *equal* choices.

Maarten wrote:

To broaden it a bit: isn’t it valuable for a citizen to be on the other side and keep an eye on what a government does? With the integration of small cameras in cell phones, sinister governmental activities can come to light which might not have been discovered in the past.

If we look to Iraq, for example, we can see every detail of what is happening. Who is watching whom? Are the media, and as a result we the public, not the Big Brother nowadays?

Stephen wrote:

I hate to bring up something trivial after all these great posts, but I just re-read the comic and … isn’t the boy missing *legs* in panel 1??

Darth Bender wrote:

Stephen: that’s probably because he’s younger then

Bob Salmon wrote:

Another current example: Telecare.

BT (Exact) and Liverpool City Council put sensors in the rooms of old people living alone in their own homes and build up a model of their normal behaviour. If they deviate from this, an automated phone call is triggered saying “You might have left your tap running” (temperature too high in the bathroom), “You might have left the fridge open” (temperature too low in the kitchen), or just “Are you alright? Press 1 for yes, 2 for no.”
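The rule logic described above could be sketched roughly as follows — the thresholds, room names, and messages are my own guesses for illustration, not anything from the actual BT/Liverpool system:

```python
# Hypothetical rule table: (room, trigger condition on temperature in
# degrees C, automated phone message). All values are invented.
RULES = [
    ("bathroom", lambda t: t > 30, "You might have left your tap running"),
    ("kitchen", lambda t: t < 2, "You might have left the fridge open"),
]

def check_sensors(readings):
    """Map room temperature readings to the automated messages to play."""
    alerts = []
    for room, triggered, message in RULES:
        if room in readings and triggered(readings[room]):
            alerts.append(message)
    return alerts
```

Even in sketch form it shows how such a system stays on the Big Mother side of the line: it only ever phones the person being monitored, rather than reporting on them to someone else.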

There are benefits: the old person retains their independence. I assume any home help for the person continues e.g. to help them get dressed. But I also assume it’s a cheaper way of running it than having periodic checks by people.

The big cheese at Liverpool City Council I heard interviewed about this said that if it is cheaper then it means they have money left over to help more old people live at home or put into other social needs.

They interviewed someone with the system in her house, and she didn’t feel like a prisoner or spied upon, and seemed to like the feeling of security.

So, is the principle of avoiding automated Big Brother worth not giving this kind of care and independence to senior citizens (in a world of finite resources)?

David wrote:

The kid does have legs in panel 1 — they’re the short blue section between his green shirt and white shoes, partially obscured by his mother’s right hand.

Stephen wrote:

Ah, I see my problem … I was seeing the white section near the bottom of the shorts as part of the shorts, whereas they are actually his *shoes*.

*blush*

Dom wrote:

It kind of reminds me of that bit in The Beach, where their freedom is great until one of the fishermen gets his leg bitten off by a shark. That’s when he would sacrifice his freedom for a bit of Big Mother.




OK/Cancel is a comic strip collaboration co-written and co-illustrated by Kevin Cheng and Tom Chi. Our subject matter focuses on interfaces, good and bad and the people behind the industry of building interfaces - usability specialists, interaction designers, human-computer interaction (HCI) experts, industrial designers, etc.