Monday 9 March 2009
Learning to love Spam
Posted by: Ian Angell
I used to be very annoyed whenever spam mail managed to penetrate my mail filters. But no more! In a flash of enlightenment I found myself in awe of the sheer ingenuity (as well as the brass nerve) of the spammers. Now I see spam as an amazing phenomenon. Spam seems to pop up out of nowhere - although the ISPs, Google, Uncle Tom Cobley and all are of course involved.
I started collecting spam and studying it! Of course I make sure to use (free) e-mail accounts extraneous to my normal Internet existence to avoid disruption; thanks to Google, Yahoo and Hotmail that's easy. Now I send e-mails consisting of one word ('holiday,' or 'car,' or 'restaurant,' or 'loan'), using one unique e-mail account for each experiment (and not using that account for anything else), and wait to see what pops up - then I draw charts of the rate of incoming spam. Some of the accounts I've set up are left entirely unused, giving me a control experiment: a baseline of the spam e-mails that are simply out there in the white noise.
I've just challenged the students on my course Global Consequences of IT at LSE to see who can attract the most spam - the winner gets a free lunch at a local Thai restaurant. I'm interested in the various strategies they can devise to maximize spam attraction.
Why not try this out for yourself? Set up your own laboratory, and learn to love spam! The internet is truly a wondrous ecosystem composed of the most fabulous creatures - spam is just one. It's fascinating getting to grips with their Natural History.
To this end I will be setting up a separate blog under the name WebCoherence to invite anyone interested to join in the experimentation. I've still to work out the practicalities, but if I can interest a large number of people, then we can focus 'The Human Computer' on the issue, and uncover some very valuable statistical information about how the web really operates.
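For anyone wanting to keep score in their own spam laboratory, the bookkeeping is trivial. Here is a minimal sketch of the idea; the log format and the account names are invented purely for illustration:

```python
# Tally spam arrivals per bait account per day, so the rates can be
# charted over time. Each log entry is a (account_name, arrival_date)
# pair; in practice these would be scraped from the inboxes.
from collections import defaultdict
from datetime import date

def daily_spam_counts(arrivals):
    """Return a dict mapping (account, day) to the number of spam hits."""
    counts = defaultdict(int)
    for account, day in arrivals:
        counts[(account, day)] += 1
    return dict(counts)

# A bait account seeded with the word 'loan', plus an untouched control:
log = [
    ("loan-bait", date(2009, 3, 2)),
    ("loan-bait", date(2009, 3, 2)),
    ("loan-bait", date(2009, 3, 3)),
    ("control",   date(2009, 3, 3)),
]
print(daily_spam_counts(log))  # two hits on the bait on 2 March, one each on 3 March
```

Comparing the bait accounts against the untouched controls is what separates spam you attracted from spam that was in the white noise anyway.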
Sunday 5 October 2008
The duality of Digital Identity: We ‘r’ what we do!
The ‘science’ of digitization follows from the worldview of Newtonian Mechanics, i.e.:
- Materialism: all that matters is ‘matter’ (i.e. the tangible and quantifiable)
- Reductionism: take it apart and analyze the pieces
- Determinism: we can predict the outcome
But to understand even the ‘gene’, the atomic representation of an individual’s identity, this worldview is too unsophisticated:
“When a gene product is needed, a signal from its environment, not an emergent property of the gene itself, activates expression of that gene.”
(Nijhout, 1990)
In fact, according to the theory of ‘quantum entanglement’:
"The peculiar situation is: the best possible knowledge of a whole does not necessarily include the best possible knowledge of all its parts, even though they may be entirely separate..."
(Schrödinger, 1935)
Hence, the objective syntax & semantics of mediating artifacts that facilitate the storage, communication, aggregation and processing of ‘personal data’ are in ‘tension’ with the impressionable, subjective and indeterminate but holistic perception of ‘personal identity’.
There is a significant increase in the number of digital devices and applications that individuals use to transact and interact. My activities in the digital environment leave ‘traces’ over time, whether voluntary and informed (e.g. online registration, blogs, credit card transactions, online medical records with the NHS) or involuntary and unknown to me (e.g. a bank sharing account details with the tax man, or companies profiling my online behavior and selling it to marketing firms). In other words, the digital identity of the person grows.
Ownership of my ‘identity’ is expressed as a need to control each of these traces of digital activity individually. But the sustained management of such a dispersed multiplicity of traces is humanly impossible. Hence the individual can feel a sense of loss greater than any objective harm when a trace of her/his identity is ‘violated’.
Hypothesis:
'There is a correlation between the increasing awareness and assertion by individuals of their ‘right of and to identity’ and the frequent use by ‘others’ of their digital data.'
Sunday 28 September 2008
It will all end in tears
Posted by: Ian Angell
Bear Stearns, Lehman Brothers, Merrill Lynch, Fannie Mae, Freddie Mac, then AIG and Washington Mutual – and over here Northern Rock, HBOS, and Bradford & Bingley – and so many, many more. Newspaper headlines scream out that the markets have failed. Nonsense. The turmoil we are seeing is the markets finally operating properly again. Get one thing straight: they haven’t been working over the past few decades of greed and degeneracy. But you can always rely on markets to reassert themselves.
Since the early 90s I have been warning that all this nonsense would all end in tears – I am called “the Angell of Doom” because of my predictions about the control freaks who think they are immune to the uncertainty implicit in the real world. Number mysticism, aided and abetted by computer technology, has turned the world’s financial markets into a huge global casino. First we had ‘the chartists’, who claimed to predict stock movements by pattern matching, as if the market was some recurring dendrochronology. Then came the ‘Masters of the Universe’ and their mathematical models. Developed by the likes of Nobel Laureates Robert Merton and Myron Scholes, these methods are designed to beat the system. Accordingly, hedge funds leverage already huge amounts of money into astronomical sums that are then placed as bets. These ‘big swinging dicks’ of Wall Street and the City of London believe in the mathematical guarantee of ‘riskless risk’, that they can beat the system.
With the vast sums involved even very small percentage gains turn into a tidy profit. Indeed they did win big in the 1980s and 1990s, which is why the banks were happy to ‘lend’ them ever-increasing sums of money. Then in 1998 Long Term Capital Management (LTCM), which had leveraged its $4.5 billion into a $1.25 trillion bet, suddenly lost 44% of its capital. Only swift action by the US Federal Reserve Bank avoided global financial Armageddon. Banks had to write off hundreds of millions of dollars, but that was child’s play compared to the subprime mortgage nonsense of 2008, the credit crunch, and the ‘shorting’ of bank stocks.
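The scale of that gamble is worth spelling out. A quick back-of-the-envelope calculation, using only the figures quoted above, shows just how thin the margin for error was:

```python
capital = 4.5e9       # LTCM's own capital: $4.5 billion
exposure = 1.25e12    # the total position: $1.25 trillion

leverage = exposure / capital
print(round(leverage))                # roughly 278-fold leverage

# At that gearing, a tiny adverse move devastates the capital base.
# The move needed to destroy 44% of LTCM's capital was only:
adverse_move = (0.44 * capital) / exposure
print(round(adverse_move * 100, 2))   # about 0.16% of the position
```

A market twitch of a sixth of one percent, in the wrong direction, was enough to vaporise nearly half the firm's capital.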
These financial models and assorted trickery can only ever be a pale shadow of what actually happens, and can never emulate the subtle, and not so subtle, checks and balances and the feedback of unknown and unknowable interactions. It seems that neither the chartists nor the masters of the universe had ever heard of Goodhart’s Law. Charles Goodhart, a distinguished LSE Professor of Economics, says: “any observed statistical regularity will tend to collapse once pressure is placed upon it for control purposes”. What he is saying is that you cannot mix up cause and effect. Any observed regularity in society is an effect, but the moment you measure it, and use that measure as the basis of control, you are making the false assumption that it (the regularity) is the cause of your observation.
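Goodhart's Law can even be demonstrated numerically. The following toy simulation (an illustration of mine, not Goodhart's own) shows a proxy measure that correlates strongly with an underlying quality, right up until the proxy itself becomes the target and is gamed:

```python
import random
random.seed(0)

def correlation(xs, ys):
    """Pearson correlation coefficient of two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Before control: the observed regularity. The proxy measure is
# genuine quality plus a little noise, so it tracks quality closely.
quality = [random.gauss(0, 1) for _ in range(1000)]
proxy = [q + random.gauss(0, 0.3) for q in quality]
print(round(correlation(quality, proxy), 2))   # strong: close to 1

# After control: once the proxy is targeted, agents maximise the
# proxy directly, regardless of quality - the regularity collapses.
gamed = [random.gauss(3, 0.3) for _ in range(1000)]
print(round(correlation(quality, gamed), 2))   # collapses: close to 0
```

The regularity was an effect of the underlying quality; the moment it is used as an instrument of control, it stops carrying any information at all.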
Underlying all the self-assured mathematics of financial instruments and hedge funds is a manipulation of observed regularities. However, once the sums involved became so massive, the gamblers had in fact turned a bet into an attempt to control the system. Collapse was inevitable. How could they believe that mathematical models are some kind of computerised Viagra for business? Ten years ago I labelled the bonus-swilling Masters of the Universe as just a bunch of dick-heads, although the damage they have caused is far worse than even I imagined.
The collapse we see is not the markets failing, quite the contrary – the markets haven’t been operating freely for quite some time, and this always creates tensions that eventually will be released, sometimes catastrophically.
All these government bailouts are yet another bunch of control freaks thinking they can manipulate the market, and deny Goodhart’s Law. Fat chance. Always, the will of the market will out. More tears before bedtime I’m afraid.
Friday 22 August 2008
Be Afraid! Be very Afraid!
Posted by: Ian Angell
Be afraid! Be very afraid! Paranoia. That’s what I learned at this year’s back-to-back Black Hat and DefCon conferences in Las Vegas – among the computer world’s premier security events. In the former, hackers line up to tell Chief Security Officers of the latest vulnerabilities in their companies’ computer systems. In the latter, the hackers tell each other of the latest ‘cool’ flaws.
And those vulnerabilities range from the sublime to the ridiculous. Over coffee, a pasty-faced youth in dreadlocks enthused over his discovery that by passing certain sequences of electronic signals into the XXXX chip, he could bypass the security and learn all its secrets. A more soberly dressed pair of presenters told of how a certain banking system contained a very embarrassing flaw: pay a negative sum of money into another’s bank account, and that amount flows back into yours!
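That negative-payment flaw boils down to a single missing input check. A toy sketch (the accounts, names and figures here are all invented; no real banking system is being described) shows how the trick works:

```python
# Toy model of the flawed transfer logic: nothing rejects a negative
# amount, so 'depositing' -100 into the victim's account actually
# pulls 100 out of it and into the attacker's.
accounts = {"attacker": 0, "victim": 500}

def pay(payer, payee, amount):
    # The missing guard - a real system must refuse amount <= 0,
    # e.g.  if amount <= 0: raise ValueError("invalid amount")
    accounts[payer] -= amount
    accounts[payee] += amount

pay("attacker", "victim", -100)   # 'pay' a negative sum into the victim's account
print(accounts)                   # {'attacker': 100, 'victim': 400}
```

Subtracting a negative credits the payer; adding a negative debits the payee. One absent line of validation, and money flows backwards.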
At Black Hat we were invited to access the Internet via a free but hostile wireless network that was ‘aggressively monitored.’ If they managed to hack into your system then you were shamed with your name placed on the ‘Wall of Sheep.’ It was a long list. A coward, I used a wired network, but as it turned out that too got hacked!
The talks listed vulnerabilities in system after system, many I had never heard of. But not hearing of them was no comfort – they were all deeply embedded, fundamental parts of the computer systems I use every day, or of the banking system that holds my hard-earned money, or of prescription drug dispensing systems, or of heart pacemakers, or of Radio Frequency Identification (RFID) cards and similar chips. Think of the recent problems with Oyster cards. And it’s not just malicious attacks on such systems – the intrinsic and ever-increasing complexity means that cack-handed attempts at correcting the faults can be just as devastating. It’s not just TfL who has problems. As I write, the Massachusetts Bay Transit Authority is seeking a restraining order to gag three MIT students, preventing them from talking at DefCon.
Everyone is in denial. I was being told that every system is compromised, from top to bottom - from the most sophisticated software layer to the lowest level of electronic activity. It seems that only the threat of legitimate violence by the state against troublemakers was keeping the show on the road. Attending the ‘Meet the Feds’ panel was a must, but I failed to get in.
Crowds of Hell’s Angels, goths, tattooed mohawks in tartan, multiple body-piercings, adolescent geeks and pony-tailed denim-clad pensioners had already crammed the room. Exasperated fire marshals ushered the excess audience away into other more esoteric presentations, normally the reserve of smug in-crowds. No matter, the chief Fed thoughtfully wore a big sheriff’s star on his chest, and was happy to talk later with all who approached him … although his miserable message was of a tidal wave of security threats.
Be afraid! Be very afraid! I’m seriously considering abandoning the Internet, and taking up knitting.
Monday 4 August 2008
Ian Angell on Intellectual Property Rights
Part 1
Part 2
Wednesday 30 July 2008
The Greatest Thing Since Sliced Bread!
Posted by: Ian Angell
As a professor in a department that researches innovation, I am increasingly being asked about this hot topic. Rather than giving a bald ‘definition,’ let me talk around innovation, and that may hopefully clarify things.
Let’s get one misconception out of the way immediately: that creativity comes with the lone genius having a flash of inspiration … the light-bulb moment! It’s a lot more complicated than that. Innovation isn’t a single event; rather, it’s a continuous process. I’m with Thomas Edison: “genius is 1% inspiration, 99% perspiration.”
The creative process was mapped out in three stages by the nineteenth-century German scientist Hermann von Helmholtz: Saturation, Incubation, and Illumination. The French mathematician Poincaré added a fourth: Verification.
Saturation: fill the mind with the problem – until the point where extra data won’t take you any further forward.
Incubation: Keep thinking - mental activity must continue, even subconsciously.
Illumination: The ‘light bulb’ comes on. This is not some single moment, but the result of a long drawn out process.
Verification: even then the idea must be checked empirically.
Numerous other authors have extended this list over the years:
First Insight ➔ Preparation ➔ Saturation ➔ Incubation ➔ Illumination ➔ Verification ➔ Evaluation ➔ Elaboration.
Here the original list of four has been topped and tailed with:
First Insight: a vague recognition
Preparation: collecting the necessary resources
Evaluation: is the result of any use/value?
Elaboration: taking it further; adjustment, expanding its utility.
Two further entries hover over the whole process:
Determination: keep going despite frustrations and set-backs.
Context & Timing: being in the right place at the right time, so that you can convince others to use your invention and that they begin to see it as “the greatest thing since sliced bread.”
And what is so great about sliced bread? The great invention shouldn’t be called sliced bread at all, but ‘pre-sliced’ bread. Whatever we choose to call it, its invention exhibits all the above features of innovation.
But that wasn’t the whole story. The clincher turned out to be a particular electric toaster invented by Charles P. Strite, although his toaster wasn’t the first. That accolade seems to belong to a British firm: Crompton & Co in 1893. In the intervening years many more types were produced, among the most notable being the D-12 introduced by General Electric in 1909. {Toasters have an absolutely fascinating history, and I wholeheartedly recommend that anyone interested in innovation should visit http://www.toaster.org/, where they will get a dynamic and particular sense of how innovation progresses.}
During World War I, Strite worked in a factory where each day he saw toast being burnt in the cafeteria. The problem was people took their eye off the bread as it toasted. His solution was a toaster that did not require human attention: the Toastmaster! This was a spring-loaded, automatic, pop-up toaster with a variable timer. Sold to restaurants from 1919, it hit the retail shops in 1926.
Thanks to Otto Rohwedder's bread-slicing machine, pre-sliced Wonder Bread was in mass production by 1930. Sliced bread in waxed paper wed to the Toastmaster was a marriage made in capitalist heaven. Its market penetration following the Wonder Bread advertising campaign is legendary. By 1933, 80% of all the bread sold in the United States was pre-sliced and wax-wrapped.
Here we have a clear example of how no product of the creative mind comes into existence in a flash, or in a vacuum – it co-evolves with other artefacts. Every invention and creation stands on the shoulders of past giants; but it also needs the popular acceptance of other prior inventions, which together spark interest in the marketplace. However, if access to those inventions is restricted, then there will be no experimentation, no variation, no creativity.
Most innovations are applications/variations that spring from prior innovations, exactly as Herbert Kroemer, the 2000 Nobel Physics Laureate, explains in his Lemma of New Technology: “The principal applications of any sufficiently new and innovative technology always have been – and will continue to be – applications created by that technology.”
The same can be said of innovation in general. Innovations do not come from orthodox creators, but from users on the margins, who are free to experiment with radical ideas. What Kroemer is implying is that although a technological innovation occurs in a particular context, numerous people must run with that innovation to create a whole raft of derivative applications, which could not even be imagined by the original inventor. If that inventor restricts what can be done with his work, in effect banning derivative works, then he is limiting its potential, and cutting off all future revenue streams.
There is no knowing in advance what the really useful applications will be. There needs to be wide-scale experimentation – the more the merrier. Natural selection will bring the successful to the fore. According to Seth Godin, unless the idea of the innovation’s utility has captured the imagination of the market, unless the idea has spread, then there is no selection – natural or otherwise – and so nothing happens. Over-charging for, or over-protection of, intellectual property will ensure that the innovation stays in the wilderness.
Suppose that Crompton & Co. had been able to patent the very concept of an electric toaster in 1893 – then derivatives would have been blocked; Strite would not have created the pop-up toaster; and Rohwedder’s sliced bread would not have achieved the status of being “the greatest thing!”
Photo permissions, with thanks:
Otto Rohwedder: Frank Passic; http://www.albionmich.com/
Charles Strite and Toastmaster: Eric Norcross; http://www.toaster.org/
Wednesday 16 July 2008
No Pearls in this Oyster
Posted by: Ian Angell
On Saturday 12th July, between 5.30am and 9.30am, at least 60,000 passengers who swiped their Oyster Card, Transport for London’s pre-payment system, had their cards corrupted. To avoid rush hour chaos on Monday, bus and tube commuters travelled for free if their cards registered an error.
So the Oyster Card system failed. Surprise! Surprise! The only property that all systems have in common is that THEY ALL FAIL … eventually. It’s not a question of if, but when. And the bigger the system, the greater the opportunity for failure.
I and colleagues at the LSE tried to warn government ministers about their national ID card scheme. All we got for our trouble was slander and abuse. Such offence is typical of those who subscribe to the “pixie dust” school of technology: computation is a magic substance to be sprinkled over problems, that, hey presto, vanish.
Systems are much more like a life form: they are entropic; they degrade over time. Databases, in particular, pick up errors, and then data error compounds data error. For instance, the DVLA in Swansea admitted in 2006 that a third of its entries contained at least one error, and that the proportion was getting worse.
This decay is caused by the complexity of the interaction between computer installations and human activity systems. We've all had encounters with computers getting it wrong. For years the banks insisted that ‘ghost transactions’ at their ATMs were frauds by cardholders, when they were in fact system errors.
Usually the minor day-to-day problems with a system are resolved by a sensible employee, rather than by the managers who administer it. The company has a duty of care to take bus passengers home when they find themselves stranded in a remote spot because their card has unexpectedly run out, for whatever reason - especially given that there is no display showing how much is left on the card, either when passengers "touch in" on a reader in the middle or at the rear of a bendy bus, or on the card itself. The fact that the cash fare is exorbitantly higher than the Oyster fare, bullying any regular traveller into choosing Oyster, only reinforces the need for an on-card readout.
This does raise the question of whether there have been other problems with the Oyster Card, on a much smaller scale. TfL couldn’t deny this weekend’s shambles, but can we be absolutely sure that no previous case of alleged fare dodging was in fact a system failure?