Categories
Conference Notes Politics Technology

Live from ETech: iRobot…

For the most part the ETCon keynotes are pretty much high-concept fluff. They’re fundamentally high-profile, high-glamour bits of hardcore tech that are (often) completely outside the practical experience of the so-called Alpha geeks who attend these events. But they have their value – they’re designed, I imagine, to be brain-openers rather than brain-developers; they’re there to extend the aspirations, intentions and creativity of the people who attend the event rather than to be of direct use to them. Nonetheless, if you’re not blown away by the technology or awed by the future tech on display, they can seem like rather a waste of time. Bring on the stuff I can actually use…

Last year the troubling session of this kind was K. Eric Drexler’s on nanotechnology – a subject most people had already read about at great length, but one on which there didn’t seem to be much visible movement. The geeks in the room were interested in the theory but wanted results, or something they could participate in. Intrigue fought with frustration and in the end – I think – frustration won. This year that tension was never more in evidence than in the second keynote of the morning: Robots: Saving Time, Money and Lives.

Helen Greiner from iRobot Corporation came on stage and seemed surprisingly nervous. She started talking about the Roomba automatic robotic hoover and did so at considerable length. The immediate interest (“I want one”) faded quite rapidly as people tired of hearing about the technological challenges of sensing obstacles, picking up dust and getting in close to walls. Watching something of technological interest but so distant from the day-to-day activities of most of the people in the room gradually ceased to be fascinating. But all that changed when she moved on to the military applications, and particularly the Packbot [See the brochure].

The first reaction to the Packbots is fascination and a certain amount of awe. Comments like “I’ve seen this movie!” and “I want one” mix with admiring responses to the robustness of the devices concerned. A video is shown in which a Packbot is thrown through a window, lands with a thump, bounces a bit, rights itself, looks around and wanders off. One zooms up a staircase. One falls from a second-storey window and survives intact. Murmurs of delight at the new toy on offer reverberate through the room.

But gradually the mood changes and anxieties start to appear. Questions about the applicability and potential uses of the technology start to collide with the natural utopianism of the geek audience. What will these robots be used for? Who will control them? Where are the controls? It’s not immediately clear exactly where the anxiety is coming from – we all appreciate that weapons have to be built, that there is a need for the armed forces. But there seems to be something different about using robotics. Thinking about it, I come to the conclusion that maybe it’s a sense of automated killing – an absence of human presence that makes the whole thing resonate with the increasingly mechanised processes of death that echoed through the last century. Is keeping people further out of the equation actually a good idea? Does it discourage or encourage conflict if your side can eradicate another country without suffering any losses at all? Those human horrors of shell-shock and war-weariness – the insanity caused by human-upon-human violence – suddenly seem to me almost preferable: deterrents designed to stop us arbitrarily exterminating people and going to war.

I’m not going to judge the people involved – I don’t have that right. We all know that warfare and the technologies of warfare must evolve and adapt. The arms race still exists, and will continue to do so as long as states feel under threat from other states or from terror attacks. It’s just that I didn’t expect such an early brain-opening session to ring such alarm bells or to give me such concern for the future… On occasion, this country I’m visiting feels like it believes itself to be under siege – like some kind of gated community surrounded by paramilitary, robotic guards…

Categories
Business Radio & Music Technology

On the benefits of competing audio formats…

There’s a fascinating clump of posts going around at the moment about the various DRM-based digital audio solutions you can buy. The one that kicked things off was a post on The Scobleizer (A challenge for webloggers: handling organizational difficulties) which included a chunk of argument about why people who are going to buy music files with DRM should buy them in Windows Media format. Here’s the main chunk of the argument:

When you hear DRM think “lockin.” So, when you buy music off of Napster or Apple’s iTunes, you’re locked into the DRM systems that those applications decided on. Really you are choosing between two competing lockin schemes.

But, not all lockin schemes are alike, I learned on Friday. First, there are two major systems. The first is Apple’s AAC/Fairtunes based DRM. The second is Microsoft’s WMA

Let’s say it’s 2006. You have 500 songs you’ve bought on iTunes for your iPod. But, you are about to buy a car with a digital music player built into it. Oh, but wait, Apple doesn’t make a system that plays its AAC format in a car stereo. So, now you can’t buy a real digital music player in your car.

(I should mention at this point that Scoble works for Microsoft, but I’ll say straightaway that I don’t think that’s particularly relevant to the argument at hand. Nonetheless, cards on the table.)

So the argument at this point is that if you choose lock-in with Microsoft, your music files will work on a wider variety of devices than if you choose lock-in with Apple; therefore you should choose lock-in with Microsoft. At which point BoingBoing’s Cory Doctorow weighs in:

In this world where we have consumer choices to make, Scoble argues that our best buy is to pick the lock-in company that will have the largest number of licensees.

That’s just about the worst choice you can make.

If I’m going to protect my investment in digital music, my best choice is clearly to invest in buying music in a format that anyone can make a player for. I should buy films, not kinetoscopes. I should buy VHS, not Betamax. I should buy analog tape, not DAT.

Because Scoble’s right. If you buy Apple Music or if you buy Microsoft Music, you’re screwed if you want to do something with that music that Apple or Microsoft doesn’t like.

Cory’s argument then is the fairly commercially radical proposition that we should buy only open music files, that companies should sell open music files (there is a precedent here – Bleep sells DRM-free songs from Warp Records), and even that companies like Microsoft should be using their substantial legal power to fight the record companies to be able to sell DRM-free songs online.

Now I’m not going to argue with that, although – to be fair – I think the current climate makes it pretty unlikely to happen. The various companies concerned are too neurotic about it, and frankly Microsoft has too much to lose from the proposition that intellectual property should be distributed without arcane DRM attached to it. Instead I’m going to argue that even if we’re only given the choice between two DRM schemes, we should still not just automatically go for the one that plays on the most devices. Because what does this mean in the end? No more or less than yet another monopoly at the operating system level – the musical infrastructure ends up belonging to Microsoft.

The fact is we shouldn’t think in those terms at this stage. We should be building mixed musical libraries and expecting digital music manufacturers to support all of them – not just some, as it suits them or whichever company ends up dominating the market. We’ve been down this path before – the company that owns the monopoly has the least to gain from a rapid pace of innovation, the least to gain from being standards-compliant. We’ve seen it at the level of operating systems and internet browsers, and now we’re seeing attempts to own and define the one successful format in which music files could sit for the next few decades. These things are too important to be left in the hands of one company. We need consumer choice at the level of which DRM (or lack of DRM) we’re comfortable buying, we need variety so that different types of audio file can be released via a variety of business models, we need variety – fundamentally – because otherwise we all lose.

The examples that people cite about competing formats no longer hold true for music. It’s not like VHS and Betamax – we’re not talking about hardware with different-sized slots that only one kind of music delivery system can fit into. No – with music we mostly have applications on our desktops that can play dozens of different formats, whether we notice it or not. Just the other day RealOne announced that it could now play Apple-encoded AAC files, and the rumour is that HP’s deal with Apple required that the iPod have its ability to play WMA files restored. These things can play more than one type of file and we should be doing our damnedest to make sure that continues to be the case. It should be obvious to car audio manufacturers that their devices should be able to play AAC tracks – that there are hundreds of thousands of people across America (and soon Europe) who are going to want to do more with the songs they’ve bought. And it should be obvious to all of us that we want a world in which new formats can be integrated into our listening without any particular effort, or at least without us having to rebuy all our old tracks to work on mutually incompatible players.

So in the meantime, buy, steal or rip whichever tracks suit you best in whatever format you want and make it your mission to put pressure on all the players (both business players and audio players) concerned to support as many of them as possible as soon as possible. And don’t listen to anyone who says that having one organisation controlling the musical infrastructure will result in greater choice. That’s never been the case in the past, and I very much doubt it will be so in the future either.

Categories
Design Personal Publishing Technology

Using Wikis for content management…

So here’s a thought, partly inspired by an e-mail from a work colleague and partly by Haughey.com. Creating and editing wiki pages is extremely simple and elegant once you get past the first thirty-minute learning curve. And essentially you end up with a page that’s got an incredibly simple template, pretty well marked-up code (or at least it could have, if you used the right Wiki system) and can be edited incredibly quickly. Now, imagine for a moment that the Wiki page itself is nothing but a content-management interface, and that the Wiki has a separate templating and publishing engine that grabs what you’ve written on the page, turns it into a nicely designed, fully functioning (uneditable) web page and publishes it to the world. It could make the creation of small, information-rich sites enormously quick – particularly if you built in FTP support.
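
To make that concrete, here’s a minimal sketch in Python of what the publishing half might look like. The render_wiki() function, the template and the file layout are all invented stand-ins rather than any real wiki engine’s API:

```python
# A minimal sketch of the wiki-as-CMS idea. render_wiki() is a crude,
# hypothetical stand-in for a real wiki renderer; the template and output
# layout are invented for illustration.
import os
import re

TEMPLATE = """<html><head><title>{title}</title></head>
<body><div id="content">{body}</div></body></html>"""

def render_wiki(text):
    """Turn wiki markup into HTML: just paragraphs and '''bold''' here."""
    html = "\n".join("<p>%s</p>" % p for p in text.split("\n\n"))
    return re.sub(r"'''(.+?)'''", r"<b>\1</b>", html)

def publish(page_name, wiki_text, out_dir="published"):
    """Grab what was written on the wiki page and push out a plain,
    uneditable HTML page (an FTP upload step could go here too)."""
    os.makedirs(out_dir, exist_ok=True)
    html = TEMPLATE.format(title=page_name, body=render_wiki(wiki_text))
    with open(os.path.join(out_dir, page_name + ".html"), "w") as f:
        f.write(html)

publish("HomePage", "Welcome to the site.\n\nEdit me '''quickly'''.")
```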

Now, one of the problems with wikis generally is that they don’t lend themselves to the creation of clear, sectionalised navigation. Nor do they naturally find it easy to use graphic design, colour or layout differently on separate pages to communicate either your context or your location in the site. That’s not to say that wikis are broken, of course – just that the networked rather than hierarchical model of navigation they lend themselves to isn’t suitable for all kinds of public-facing sites (the same could be said of the one-size-fits-all design of the pages). This would clearly be a problem. Wikis sacrifice that kind of functionality on the whole in order to gain advantages in other areas (i.e. collaborative site generation and maintenance). Without those advantages, you’d simply be left with an inferior product.

So how to integrate design and architecture into the production of a wiki-CMSed website? Well, it’s not a particularly new question with regard to wikis generally – loads of suggestions have been made about how some kinds of hierarchy could be built in, and some of them have been implemented. On the whole they’ve not been terribly successful because they present a higher level of user-facing complexity, and with a lot of potentially naive users, publicly editable wikis can’t really afford complexity. But that’s not true if only one person or a small group were updating the site. The complexity level could increase a bit and the learning curve could be just a little steeper initially.

Here’s an example of how you could create hierarchy and utilise different templates at the level of the individual page. First, imagine a templating interface that allowed you to create an outline hierarchy of the various sections of a site (just like you’d produce in the outline view of Word or using something like OmniOutliner). Each section of that site-map could have a distinct template attached to it, or inherit a template from the section above. Then all you’d need on the wiki page (as content-management interface) would be a drop-down box that allowed you to choose which section the page you’d created would sit under. Given that, the mechanics behind the templating engine could automatically generate a variety of different models of hierarchical navigation and breadcrumb trails which you could embed into your templates (much like the mechanism used to move content chunks around weblogs in TypePad). The same part of the wiki page that you use to decide which section the page should be contained within could also house a .gif thumbnail of the template for that page. And the assigned section of a new page could even default to that of the page from which you created it – forward-link from a page about Troubleshooting (in the section “Help”) to create a page about Error Messages, and Error Messages is automatically created inside the “Help” section. And all of this could then be ‘published’, pushing everything out in a lovely, stylish, elegant and visually rich format to the rest of the world at the push of a button.
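
As a rough illustration of that inheritance idea, here’s a sketch in Python. The outline structure, section names and template names below are all hypothetical:

```python
# A sketch of the outline-hierarchy idea: each section can carry its own
# template or inherit one from the section above, and the same outline
# drives the breadcrumb trail. All names here are invented.
SECTIONS = {
    "Home":            {"parent": None,   "template": "default.tmpl"},
    "Help":            {"parent": "Home", "template": "help.tmpl"},
    "Troubleshooting": {"parent": "Help", "template": None},  # inherits
}

def template_for(section):
    """Walk up the outline until a section with an explicit template appears."""
    while section is not None:
        tmpl = SECTIONS[section]["template"]
        if tmpl:
            return tmpl
        section = SECTIONS[section]["parent"]
    return "default.tmpl"

def breadcrumb(section):
    """Generate the breadcrumb trail from the same outline."""
    trail = []
    while section is not None:
        trail.append(section)
        section = SECTIONS[section]["parent"]
    return " > ".join(reversed(trail))

print(template_for("Troubleshooting"))  # help.tmpl, inherited from "Help"
print(breadcrumb("Troubleshooting"))    # Home > Help > Troubleshooting
```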

Wouldn’t that be cool? Blogger-style management for all kinds of other sites… The only thing that doesn’t seem obvious to me at the moment is how you make the intra-wiki links not look like wiki links to the general public while preserving the ease of use they give the person creating the pages… Any thoughts?

Categories
Technology

Letters, Data and Metadata…

Considering how annoying I find The Social Life of Information (again – more on that later), it’s surprising how often I feel that I should be posting some of the nuggets contained within it for a larger audience. Anyway, there’s a really interesting quote in the book from Paul Duguid’s trip report from Portugal which I think is pertinent to my other post (A fragment of a world full of metadata) on the vast amounts of metadata that the real world supplies us with around the edges of the ostensible ‘content’. But then again – as I say – I find much of the book so aggravating that I’m not sure quoting a chunk of it to support one of my positions is a particularly inspired idea.

I was working in an archive of a 250-year-old business, reading correspondence from about the time of the American Revolution. Incoming letters were stored in wooden boxes about the size of a standard Styrofoam picnic cooler, each containing a fair portion of dust as old as the letters. As opening a letter triggered a brief asthmatic attack, I wore a scarf tied over my nose and mouth. Despite my bandit’s attire, my nose ran, my eyes wept, and I coughed, wheezed and snorted. I longed for a digital system that would hold the information from the letters and leave paper and dust behind.

One afternoon, another historian came to work on a similar box. He read barely a word. Instead, he picked out bundles of letters and, in a move that sent my sinuses into shock, ran each letter beneath his nose and took a deep breath, at times almost inhaling the letter itself but always getting a good dose of dust. Sometimes, after a particularly profound sniff, he would open the letter, glance at it briefly, make a note and move on.

Choking behind my mask, I asked him what he was doing. He was, he told me, a medical historian. (A profession to avoid if you have asthma.) He was documenting outbreaks of cholera. When that disease occurred in a town in the eighteenth century, all letters from that town were disinfected with vinegar to prevent the disease from spreading. By sniffing for the faint traces of vinegar that survived 250 years and noting the date and source of the letters, he was able to chart the progress of the cholera outbreaks.

His research threw new light on the letters I was reading. Now cheery letters telling customers that all was well, business thriving, and the future rosy read a little differently if a whiff of vinegar came off the page. Then the correspondent’s cheeriness might be an act to prevent a collapse of business confidence – unaware that he or she would be betrayed by a scent of vinegar. (Chapter 7, p.173)

Categories
Science Technology

Enhanced reality: Noise in Space?

So it occurred to me (while watching some dumb sci-fi TV series set in space) that maybe spaceships making noise in a vacuum isn’t such a dumb idea after all. I mean, obviously they wouldn’t (couldn’t) make any noise, but there are all kinds of reasons why it would be in a crew’s best interests for their ship to simulate the sensation. After all, noise can convey all kinds of useful information – different guns make different noises, different engines make different noises, and you can tell the location – perhaps even the speed – of an object by sound alone. If we assume that – in space – the computers and sensors on ships would most likely be taking in much more information than a human could easily assimilate through a visual interface, then it makes total sense that you’d try to deliver some of it through sound. In fact it seems astonishing that you wouldn’t!

In such an environment – detached from everything outside your pressurised container by metal and vacuum – the only sense you’d otherwise have much use for would be sight. Smell would be pretty much redundant, you couldn’t reach out and touch anything, and taste (bluntly) wouldn’t be that useful. Even the limited motion senses that we have would probably be quite dramatically interfered with by the unfamiliarity of space and by either an absence of gravity or highly localised and disorientating forms of it. That being the case, making use of a sense that would otherwise have very limited input seems eminently practical and useful. Overlaying this enhanced – information-delivering, yet still artificial – reality over normal video footage would create an outer space that was more obviously comprehensible to human beings. That simple layer of mediation would help transform the insanely complex and alien into the routinely prosaic (this being – after all – precisely the reason that TV series put the noise in). So from now on I’m going to pretend that’s what they’re doing when the Romulan ships let off a volley of patouieee-ing disruptor blasts. I’m going to pretend they have a special insight into the world of the future and the ambient interfaces that they might use. I’m going to remark to myself, “How clever they were to think of that!”
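
Just to make the idea concrete, here’s a toy sketch in Python of what that kind of sonification might look like. The event types, frequencies and formulas are entirely made up – a real system would feed the results into an audio engine rather than print them:

```python
# A toy sketch of sonified sensor data: closer objects are louder, and
# approaching objects get a fake Doppler-style pitch shift (fake because
# real Doppler needs a medium). All numbers here are invented.
import math

EVENT_PITCH = {"engine": 110.0, "disruptor": 880.0, "debris": 440.0}

def sonify(event_type, distance_km, closing_speed_kms):
    volume = 1.0 / (1.0 + math.log10(1.0 + distance_km))
    pitch = EVENT_PITCH[event_type] * (1.0 + 0.01 * closing_speed_kms)
    return {"pitch_hz": round(pitch, 1), "volume": round(volume, 2)}

print(sonify("disruptor", distance_km=12.0, closing_speed_kms=3.0))
```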

For more information on various kinds of enhanced reality, you might try out some of these links:

Categories
Technology

Quick thoughts about global undo…

If only I had time to give this the attention it deserves – but alas, I must soon get drunk. Neat Chris from anti-mega has brought to public attention that massive and aggravating UI problem of what happens when you accidentally quit an application (like Safari) that allows you to have many different pseudo-documents open, all of which are lost immediately – without any kind of dialogue – when the application quits. I call them pseudo-documents because the standard behaviour for anything editable in an application is to ask whether changes should be saved before quitting. That’s not the case with tabs in browsers. If you accidentally press Apple-Q instead of Apple-W (to close an individual tab), you lose all the pages that were open in the browser (and – because the windows you’ve had open recently don’t map neatly onto the things in your history – you can also lose any easy way of finding out what they were).

Chris’ answer to this problem is the OS-wide global-undo facility, where you could simply undo your quit. Hammersley’s been talking about it too. I think this is the wrong approach – and not just because I think that it’s not going to happen for the next ten years at least, even if it’s possible – but also because I think there’s a better way.

So here’s my question: why does your browser lose its current state when it quits? Or to put it more precisely: when I restart my browser, why doesn’t it still have all the pages that were open in it when I last quit? Certainly this should be possible – and it would solve the problem (although it might be considered non-standard behaviour). NetNewsWire doesn’t forget my subscriptions when I restart it – so why should my browser forget my pages? (It’s not a direct analogy, but it makes a point.)

I’m sure there are a number of privacy reasons why this kind of thing could be a problem, and it might break the ‘session’ / ‘global’ distinction if not handled appropriately – but you could make it a preference that people turned on or off on their own computers, with the sites refreshed when you logged back on again, perhaps? I mean, that should work, right?
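
For what it’s worth, here’s a minimal sketch in Python of roughly what that behaviour might look like. The file name, format and the save/restore hooks are all assumptions rather than any real browser’s API:

```python
# A minimal sketch of 'remember my open tabs': a hypothetical browser
# calls save_session() on quit and restore_session() on launch. The
# session file name and format are invented.
import json
import os

SESSION_FILE = os.path.expanduser("~/.browser_session.json")

def save_session(open_tabs):
    """Called on quit: persist the URL of every open tab."""
    with open(SESSION_FILE, "w") as f:
        json.dump({"tabs": open_tabs}, f)

def restore_session():
    """Called on launch: reopen (and refresh) whatever was open last time.
    This could sit behind a preference for the privacy-conscious."""
    if not os.path.exists(SESSION_FILE):
        return []
    with open(SESSION_FILE) as f:
        return json.load(f).get("tabs", [])

save_session(["http://example.com/", "http://www.plasticbag.org/"])
print(restore_session())
```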

Categories
Technology

Cameras communicating with Cameras…

So here’s a dumb idea about digital cameras. Let’s imagine a world in which everyone has a camera – and they carry them with them all the time. Say – for example – that they’re built into mobile phones. Right. Now add a sensor to each camera so that it can communicate with every other camera within a narrowly focused area corresponding to the area about to be captured in the viewfinder. Right. Now every camera includes information about how the person who owns it “feels” about various uses of their image. They can say, “I don’t feel comfortable with you distributing this image to your friends” or “Don’t take pictures of me” or whatever. Maybe even “no close-ups”. This information is broadcast to any camera that tries to take a picture of you, and it influences how easily the picture can be used.

So – for example – if I were a private, nervous person who didn’t want photos taken of me at all, I could set my camera to a ‘leave me alone’ mode. If someone tried to take a picture of me on a “normal” setting, they’d find that their camera simply wouldn’t work. They’d keep pressing the button, but would be presented with error beeps instead. They’d have to actually switch to a “rude” mode in order to take a photo. And if you didn’t want a picture distributed, the phone would simply stop its owner forwarding it to other people – again, unless they were prepared to switch into a “rude” mode. Could be fun…
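
Here’s a sketch in Python of how that negotiation might work. The preference names, the broadcast step and the ‘rude’ override are all invented for illustration – in reality the preferences would arrive over some short-range radio link:

```python
# A sketch of camera-to-camera consent: each subject in the frame
# broadcasts a preference, and the camera honours it unless its owner
# has deliberately switched to a 'rude' mode.
ALLOW, DENY = "allow", "deny"

def preferences_in_frame():
    """Stand-in for the sensor sweep of the area in the viewfinder."""
    return [{"owner": "Alice", "mode": "leave me alone"},
            {"owner": "Bob",   "mode": "no close-ups"}]

def try_capture(camera_mode="normal", close_up=False):
    for pref in preferences_in_frame():
        blocked = (pref["mode"] == "leave me alone" or
                   (pref["mode"] == "no close-ups" and close_up))
        if blocked and camera_mode != "rude":
            print("*error beep* (%s objects)" % pref["owner"])
            return DENY
    return ALLOW

print(try_capture())                    # denied: Alice objects
print(try_capture(camera_mode="rude"))  # allowed: photographer overrides
```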

Categories
Technology

Highly unoriginal thoughts about mobile devices…

Notes from a conversation with Dan Hill pertaining (in particular) to address books on mobile phones. I make no claim to their originality or novelty. Almost certainly they’re on page six of some really well-known, influential book that I should have read by now…

Thought one: The mobile phone address book as a web of trust. This is really trivial, but it’s also really powerful – the telephone numbers in your mobile phone all identify actual people (however you decide to encode the metadata of their names). The telephone number is like the unique ID you give a record in a database. So what does it mean if a pair of phones have each other’s numbers in their address books? Doesn’t it imply a relationship? Perhaps even a similarity? Maybe it even means that you’re more likely than average to like each other? So if you pinged every phone that’s got internet access (and the phone was happy for you to do this) you could pretty easily make a social network map of pretty much everyone in the country. This is not a new idea.
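
A quick sketch of the idea in Python – the numbers and address books are invented, and the ‘ping’ is reduced to a dictionary lookup:

```python
# Treat each phone number as a unique key and draw an edge wherever two
# address books contain each other: a crude web-of-trust map.
ADDRESS_BOOKS = {
    "+447700900001": {"+447700900002", "+447700900003"},
    "+447700900002": {"+447700900001"},
    "+447700900003": {"+447700900001", "+447700900002"},
}

def mutual_edges(books):
    """An edge exists only where the relationship is reciprocal."""
    edges = set()
    for phone, contacts in books.items():
        for other in contacts:
            if phone in books.get(other, set()):
                edges.add(frozenset((phone, other)))
    return edges

for edge in mutual_edges(ADDRESS_BOOKS):
    print(" <-> ".join(sorted(edge)))
```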

Thought two: Self-assembling address books. So you’ve lost your phone and with it all of your numbers. You ring up two or three of your friends; they amend their records with your new number and you add their numbers to your phone. Then you trigger the ‘fix my address book’ function and sit back and watch. Your phone pings your friends’ phones. Their phones ping their friends’ phones. Every phone that has your old number in it is informed of your new one, pings your phone and builds in the reciprocal link. And those people who appear most interconnected between the groups of friends you’ve mentioned are also added to your phone. An instant sense of your social network. An instant way of grabbing your local space… This is probably not a new idea.
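
Sketched in Python, the ‘fix my address book’ trigger is essentially a breadth-first walk over that same invented contact graph:

```python
# Starting from a few seeded friends, the new number ripples outward:
# anyone reachable who still holds the old number swaps it for the new
# one and pings onward, and the new phone gets the reciprocal link.
from collections import deque

def propagate_new_number(books, old, new, seeds):
    new_contacts, queue, seen = set(), deque(seeds), set(seeds)
    while queue:
        phone = queue.popleft()
        contacts = books.get(phone, set())
        if old in contacts:
            contacts.discard(old)
            contacts.add(new)
            new_contacts.add(phone)  # reciprocal link back to the new phone
        for other in contacts - seen - {new}:
            seen.add(other)
            queue.append(other)
    books[new] = new_contacts
    return books

books = {"alice": {"old-tom", "bob"}, "bob": {"alice", "old-tom"}}
print(propagate_new_number(books, "old-tom", "new-tom", seeds=["alice"]))
```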

Thought three: Distributed 192. 192 was (until very recently) the telephone number for directory enquiries in the UK. You rang it, told them the name and address of the person you were looking for, and they gave you a number. Brilliant – except if you didn’t have the address, of course. And it cost money, and it didn’t work with mobiles. So what if, instead of doing that, you typed a search term – “Coates” – into your phone and got it to ping everyone in your address book, aggregate the results and display them to you. Wouldn’t that be easier? I don’t know whether this is a new idea or not. I doubt it.
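
In Python it might look something like this – each ‘ping’ here is just a local function call, where a real version would be a network request to a friend’s handset:

```python
# Distributed directory enquiries: ask every phone in your address book
# to search its own address book and aggregate whatever comes back.
# Names and numbers are invented.
def local_search(address_book, term):
    """What a friend's phone would run against its own contacts."""
    return {name: num for name, num in address_book.items()
            if term.lower() in name.lower()}

def distributed_192(friends_books, term):
    results = {}
    for book in friends_books:  # one 'ping' per friend
        results.update(local_search(book, term))
    return results

friends = [{"Tom Coates": "+44 20 7946 0001", "Dan Hill": "+44 20 7946 0002"},
           {"T. Coates": "+44 20 7946 0001"}]
print(distributed_192(friends, "coates"))
```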

Thought four: Collaborative work over mobile phones. So you’ve got a web of trust and you have a communications medium. Basically, that’s Friendster with a rather more intensive old-skool version of instant messaging (let’s call it “speech”). I wonder if there are people out there working on social software for phones. Or maybe social software that doesn’t have much of a human interface at all – something that’s really about collaborative sensing. Like a cyber-pet with two buttons that you can press – one if you really like a place and one if you really hate it. That gets geocoded and shared through your web of trust (because you’re similar to people you know). When you go into a place that everyone dislikes, your cyber-pet freaks out. And if you go to a place that everyone likes, it starts to purr pleasantly in your pocket… I bet someone has thought of that as well…
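
And a last toy sketch in Python, since the cyber-pet is really just sentiment aggregation over geocoded votes – the locations and votes below would in reality come in from friends’ phones through the web of trust:

```python
# Two-button cyber-pet: sum your friends' geocoded likes (+1) and
# dislikes (-1) for wherever you are, and react to the balance.
VOTES = [("soho_square", +1), ("soho_square", +1),
         ("old_street", -1), ("old_street", -1), ("old_street", -1)]

def pet_reaction(location):
    score = sum(vote for loc, vote in VOTES if loc == location)
    if score > 0:
        return "purrs pleasantly in your pocket"
    if score < 0:
        return "freaks out"
    return "sits quietly"

print(pet_reaction("soho_square"))  # purrs pleasantly in your pocket
print(pet_reaction("old_street"))   # freaks out
```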

Categories
Journalism Location Social Software Technology

Don't write off Conversations as a geek toy…

So there’s an article in the Guardian today about UpMyStreet. The article is called Street Plight and aims to understand why the company is in administration. Now, generally it’s a pretty flattering article – and a fairly accurate one – but there are odds and ends that are a bit annoying. Nonetheless I’ve decided that I’m going to look on the sunny side and concentrate on phrases like “Upmystreet is full of brainy types” and “[UpMyStreet Conversations is] a bit like a pub”. Yes. I think I’d much rather concentrate on those than on the rather less flattering “Technical people become dazzled by their own wizardry” and “Frankly, you could have more scintillating conversation with a curtain”.

Sigh. It’s no good. It’s not working. So here goes. Here’s why Clint Witchell’s comments on Conversations are unfair:

One – it’s unfair to take the conversations in any one particular area and claim they’re representative of the whole site. Like every other community, Conversations is only as interesting as the people who participate in it; but unlike most other communities, every area gets a different degree of participation. Certain parts of the country are beginning to explore the uses of the site and get involved in serious debates. Other areas are using it to chat about local news and to find local tradespeople. Other areas aren’t using it at all. It’s early days. All I can say is that if you don’t like the conversations ongoing in your area at the moment but can see the potential and value in a site that could help your neighbourhood engage with local issues – then don’t just sit there complaining and feeling superior – start a conversation and see what kind of responses you get!

Two – Conversations is a new product for UpMyStreet and it pushes the ways the site can be used into completely new areas. One of our aims was to develop the relationship between UpMyStreet and the people who come to it – to make people more regular visitors, and power users at that. I think we’ve had a certain amount of success with this kind of work – success that I think will grow as people get more used to the idea and start to use the site in different ways. It’s a process of development that aims to move people from simple information-finding towards treating the site as a bridge into their local neighbourhood. But we’re not all the way there yet. These things don’t necessarily happen overnight…

Three – just because you can’t see obvious commercial uses for the forums software doesn’t mean that there aren’t any or that we haven’t thought about it seriously! If we get the opportunity, you’ll see exactly what we’re talking about and all the commercial/charitable/political uses for the technology, but at the moment – unfortunately – we’re all a bit distracted trying to keep body and soul together! Bear with us! Have some faith!

Categories
Technology

Register Refutations…

A week or so ago I wrote a little post called Oh Self-Correcting Blogosphere. It was about an article at The Register in which Andrew Orlowski managed to mix a few half-facts with some general paranoia to assemble the spectre of a censorious and manipulative cabal of either webloggers or Google managers.

Orlowski’s gone off on another one this week – and this one’s considerably more ludicrous than the one before. This time – in the article Google washes Whiter – he’s protesting that his previous article has been hidden from people who search for the word “Googlewash” on the search engine:

“Google has made its own statement on the ‘Googlewash’: by making The Register story that coined the phrase disappear from its search results. Not all the search results, mark you, but a very specific one. When you search for the word “Googlewash” (as at 9pm Pacific Time last night) around a hundred results are returned by default. Our story, which is where the word was coined, isn’t among them. We found it, eventually, but it was very difficult.”

The stunning problem with his hypothesis (which was – if you remember – that his article had been censored by Google) is that if you click on the very first link offered you are immediately directed straight to the article in question. All that’s happened is that – for some presumably totally obvious reason – Metafilter’s discussion of Googlewashing gets higher prominence. Whether that’s because Metafilter has a higher PageRank and gets linked to more often generally, or because people linked to that particular discussion with more apposite keywords (like ‘Googlewash’, for example) – well, I don’t know. What I do know is that if Google were trying to hide Orlowski’s ‘revelations’, they’ve made a ludicrously bad hash of it. And if he were looking for censorship, perhaps he should be looking comparatively, since anyone with half a brain can find his article more easily through Google than via AltaVista, Overture or AllTheWeb.