Categories
Personal Publishing Technology

On Robert Scoble and the BBC…

Let me be clear – I’ve met Robert Scoble and he’s a decent man, and I think the impact of his weblog on the public perception of Microsoft has been significant, surprising and actually pretty important. But this front page of the BBC News Technology section is simply ludicrous. It’s absurd. I’m fairly sure that Robert knows it and would be embarrassed by the way it’s being represented, but really the people who ought to be more embarrassed are BBC News, who do a hell of a lot of good in the world, but have really plumbed new depths here:

I somehow doubt that Bill Gates is going to be bleeding internally at this news, and suggesting he would completely distorts the story. Readers are supposed to be able to trust their media sources to help them determine what’s really important in the world. Or at least that’s the BBC’s job, surely? Very disappointing.

More generally, good luck to Robert and I hope the new job is as interesting and rewarding as it seems his last one was. Couldn’t happen to a nicer chap.

Categories
Advertising Net Culture Personal Publishing Technology

What has been killing my server?

Today I was at work when Barbelith went down. MySQL errors everywhere, the community in uproar, IMs and e-mails. And it wasn’t like I didn’t have enough to do. So I explore in more depth. First step, see what’s actually happening on the server – so I launch Terminal, ssh in to the Barbelith Superserver over at Pair, find the directory with my logs in and type in tail -f access-log. Immediately, I see each request coming into the server in roughly real-time, scrolling down the page like I’m looking at The Matrix. Unix is not my strong-point, so thanks to Simon for that little trick. It’s moving too fast for me to visually get a grasp on what’s going on, but I start seeing some recurrent patterns after a minute or so – HTTrack, which a quick search reveals to be a piece of software that you run on your own computer to download complete versions of someone’s website. Given that Barbelith contains nearly six hundred thousand posts across twenty-five thousand threads (each paginating every forty or so posts), this is not a small job. And given that the software is dragging down a bunch of pages each and every second, it’s not really a surprise that the MySQL server was having some trouble.
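
Since Unix one-liners aren’t my strong point either, something like this little Python sketch does the same digestion rather more calmly than watching the scroll – assuming the log is in Apache’s combined format, where the user agent is the sixth quote-delimited field:

    from collections import Counter

    # assumes Apache combined log format: the user agent is the
    # sixth quote-delimited field of each line
    with open("access-log") as log:
        agents = Counter(line.split('"')[5] for line in log if line.count('"') >= 6)

    for agent, hits in agents.most_common(10):
        print(hits, agent)

A repeat offender like HTTrack tends to leap straight to the top of a list like that.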

So I banned the user’s IP for a bit by adding a couple of lines to my .htaccess file and waited for the site to start working again. But no luck. Exploring the database through the phpMyAdmin interface that Cal set up for me, I note that all the activity has resulted in one table in the database getting corrupted. So I dig around online a little longer, and work out how to log in to MySQL directly through the Terminal and run a REPAIR TABLE command and hope for the best. It all seems to work. Everything’s back to normal. Cheers all around. I’m very proud of myself.
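
For the record – and for anyone else flailing at the same problem – the ban amounted to a couple of lines like these in the .htaccess file (the address below is a placeholder, not the real culprit), and the repair was a single REPAIR TABLE tablename; statement from the mysql prompt:

    # .htaccess – temporarily ban one abusive client
    Order Allow,Deny
    Allow from all
    Deny from 203.0.113.42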

Except then half an hour later the site is down again. This time it’s so bad that people can’t even connect to my server at all. Every site that I run off the server is completely inaccessible to the outside world. plasticbag.org and Barbelith stop working obviously, but also other little-known ventures like Everything in Moderation and the bought-for-fun-after-seeing-a-Penny-Arcade-strip-and-maybe-taking-the-joke-a-little-to-excess Cockthirsty.com are out of action. I can’t even ssh in to my server any more. I can’t send urgent support e-mails to my hosts, or receive replies to them. I am, to all intents and purposes, dead in the water.

I ring them up – half a world away – to find out what’s going on. They’re initially mystified – MySQL is running so hot it’s a wonder that the rack-mounts aren’t melting. When they try to log in, the server basically falls over completely. A forced restart, and I hold my breath a little. When it comes back, they dig into the logs and it becomes immediately obvious to them what’s going on. Hundreds – thousands – of requests every minute for a file called mt-comments.cgi – the part of Movable Type that deals with incoming comments to my weblog. My entire site has quite directly and clearly been spammed to death.

So I’ve had to make a short-term choice while I explore my options in more depth – between a site with no comments and no site at all – and I’m afraid the answer is no more comments, at least for the time being. I’d been thinking of looking into Akismet, but there’s simply no point – even if it caught the spam, MySQL would still be dealing with all this crap-peddling evil perpetrated by money-grubbing parasites, and that means regular meltdowns. I’ve come to wonder whether the problems I’ve had with MySQL errors on Barbelith over the last couple of years have been more to do with comment spam than anything else, and – while I want to make it clear that in no way do I blame Six Apart or Movable Type, and while I’m sure there’s a way out of this situation – it has started to feel like having the mt-comments.cgi script sitting on my server is like having a bullseye painted on my chest. In the meantime, any advice people have on how to deal with this kind of activity would be very much appreciated indeed. Would moving to TypeKey-only authentication help? Should I be looking into throttling on the server? Can anyone help? The e-mail address (I’m afraid) is tom at the name of this site – or you can write your own post and link to this one and I’ll find you via Technorati.
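
For anyone in the same hole, the bluntest version of ‘no more comments’ is to shut the script off at the Apache level, so the requests never reach Perl or MySQL at all – something like this in .htaccess, assuming a stock setup:

    # refuse all requests to Movable Type's comment script for now
    <Files "mt-comments.cgi">
      Order Allow,Deny
      Deny from all
    </Files>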

Categories
Business Conference Notes Technology

How American are Startups?

The second day of the event – and the first day of the conference proper (yesterday being tutorials) – starts with a keynote from Paul Graham (see his Wikipedia entry) talking about whether or not the success of Silicon Valley might be replicated elsewhere – more specifically, How American are Startups?. Suw Charman’s done a near-perfect transcript of his talk, and Graham’s subsequently written up the piece in two parts (How to be Silicon Valley and Why Start-ups Condense in America), but fundamentally his argument breaks down (to me at least) to these points:

  • Silicon Valley is about an accumulation of people, not geography – get the right 10,000 people and you could recreate it
  • To create an environment which is conducive to start-ups you need two groups of people – rich people who are prepared to invest and lots of nerds
  • Government is not a good replacement for rich people / angel investors as they’re slow, invest inappropriately and don’t have the contacts or experience to support the right activity
  • For rich people and nerds to mix you need a location where lots of rich people who care about technology and lots of nerds want to be – New York has lots of rich people but no nerds, other places lots of nerds but no rich people
  • Places that attract nerds and rich people tend to be cosmopolitan, liberal, happy places like San Francisco, where people walk around looking happy and where lots of students attend high-class universities
  • Other features of places potentially conducive to this kind of activity: personality, good transport hubs and connections to the existing Silicon Valley, quietness and good weather – the right places are about calm rather than excitement

Anyway, it would probably be fair to say that the reaction to the session has been mixed, although it’s more to do with his thoughts about American success and European problems than the points above (I suspect). Here’s one particularly astringent comment from Jeremy Keith:

It’s essentially a Thatcherite screed about why businesses should be able to get away with doing anything they want and treat employees like slaves … He also thinks that it won’t be long before Europe is all speaking one language namely, his … What. A. Wanker.

I think Jeremy’s gone a bit over the top, but I can completely understand why he reacted the way that he did. Paul’s piece felt extraordinarily American, in a semi-utopian libertarian free-market kind of way, and I have to admit it felt alien and strange and fairly abrasive. But there were also some pretty solid insights and a hell of a well-presented argument. By the end of the piece I was wondering: was this a political screed supported by good argument? Or was this a position reached through experience that just happened to coincide with a particular political ideology?

I’ve spent about an hour thinking around this now, and have come to the conclusion that it’s probably the second of the two – an argument borne from experience but still an argument that needs to be heavily contextualised and derives from the particular environment that he operated within. The approach that Californians take to governance clearly works pretty well (for some interpretations of good), but that doesn’t make it a natural fact of the universe. It could be much more contingent than we tend to recognise.

Let me put it this way – one point that Graham made was about the role of government – basically intimating that regulation and government intervention is almost uniformly and universally a bad thing. But no government and no enlightened citizenry will be prepared to mutely accept the facts of their destiny on the basis of their weather (one of the aspects that Graham spells out as making a place attractive to the right kind of people). And all governments will try hard to make their environments more conducive to certain kinds of activity, including the US government. For example, one thing that I noticed Paul Graham didn’t mention at all during his piece was simple start-up costs. He talked about companies started in garages, but probably didn’t realise that even garage space is pretty limited for large groups of people in metropolitan areas in Europe. This is a factor that probably has no effect in California outside the big cities, but is of massive importance across Europe. Property prices and costs are so extreme in parts of Britain that it’s almost immediately impractical for two or three people to try to start a little company. This is not in defiance of Graham’s talk, it’s simply ignored by it. And he ignores cultural differences, food costs and increased risks that mean that people are simply less comfortable making these kinds of decisions. If you want to create a culture where this kind of thing is possible, then these things need to be fixed. And that means work that needs to be funded, and that means government one way or another.

And there’s another aspect which I found worrying – clearly it’s good for business to be able to hire and fire as you choose. And it’s also clearly not problematic for technology workers to lose their jobs if they’re working around Silicon Valley – it’s not like there’s a shortage of other places to work. But the laws don’t only apply to the people with lots of job mobility and freedom – they also apply to people at the bottom of the food chain. Many European countries have decided to try and protect those people at the cost of some of their business flexibility. I’m not saying one option is more right or more wrong – I’m actually quite keen on the free market, and my time at the BBC rammed home to me some of the problems of working in an organisation that’s unionised to the point that it’s unable to fire people or restructure itself effectively in response to changing circumstances. But I think it’s important to at least recognise that the things that may make a Silicon Valley possible might also be partially founded on immigrant labour, crippled unions and a lack of support for people at the bottom of the pile. What’s good for start-ups may not be good for all, and occasionally I got the impression that much of Graham’s stuff was describing the environmental factors that make start-ups work as a uniform and perfect good in the world. I don’t buy that so much.

Having said all that, I have to be honest, I pretty much agree with all of his major points, and his thoughts on the right places for start-up activity got me thinking about places in the UK that would be good seed beds for an ecosystem of small and larger companies to operate together effectively. I’m not convinced that London is a good place for this kind of stuff at all, even though unfortunately all the money and all the business ends up there.

I’ve done some exploring and found information on the top ten Computer Science departments in the UK; they are: Imperial, York, Oxford, UCL, King’s College London, Edinburgh, St Andrews, Cambridge, Glasgow and Bristol. Applying Graham’s criteria to those places, I’m afraid Scotland is probably mostly out – it’s neither an ideal transport hub nor an obvious home for rich technologists. I suspect London is simply too expensive and cripplingly scary for anyone other than people wanting to work for big businesses and media companies (it’s the New York of the UK, not the San Francisco). Which leaves Oxford, Cambridge, York and Bristol. I don’t know much about York, but my sense is that it’s a bit too far off the transport grid to be ideal, even though it has a large student population and is a relaxed and outdoorsy place. Oxford and Cambridge are obvious candidates, but I’m actually most interested in Bristol (coincidentally where I went to university), which is an hour and a half from Paddington and the Heathrow Express, is surrounded by beautiful landscape and opportunities to explore, and has a 20,000-strong student population across the University of Bristol and the University of the West of England. It’s also not unmanageably expensive.

The other place that interests me a lot is Brighton. I don’t know whether or not it has an enormous technology contingent, but I’m hearing a lot about start-ups based out of there. It’s an hour from Central London, is extremely cosmopolitan and seems to have a lot of the characteristics that a start-up culture would require. I’d be really interested to get people’s thoughts about where and how we could get a more technology-focused start-up scene going in the UK. So feel free to leave a comment.

Addendum: For those interested, he also summarised the advantages and disadvantages of the US in the start-up space, and I’ve cut back the advantages to these helpful headlines which should give you the gist of his argument:

  1. Allows immigration
  2. Isn’t a poor country
  3. Not a police state
  4. High quality universities
  5. You can fire people
  6. Attitudes that don’t associate ‘working’ with being employed
  7. Not anal about business regulations
  8. Huge domestic market
  9. High levels of funding
  10. People comfortable with career switching

Categories
Gaming Social Software Technology

Self-reflexive rulesets in online communities…

It’s Tuesday morning and I’ve been in Seattle since Sunday evening at this year’s Microsoft Social Computing Symposium and frankly, I’m completely braindead through jetlag. I’m barely hanging on to intellectual coherence by my fingernails. Sunday evening I got about three hours’ sleep in total, last night a roughly similar amount. The quality of the event has been pretty high so far though, and I’ve met some fascinating people, but I’m really not firing on all cylinders. Ross Mayfield’s taken a few chunks of notes, and most interestingly I’ve met some people with a similar interest to mine in reflexive political models in online communities, including one guy who wants to build something very similar to the place I want Barbelith to become – online, in an MMORPG. I finally got around to posting up some of my earliest ideas around this subject on the Barbelith wiki a year or two back under the Tripolitica heading, but basically it goes a bit like this:

Imagine a set of messageboards, each with their own clear identity and each with a functioning moderation system based around a pre-existing political structure – one Monarchic, one Parliamentary Democracy and one Distributed Anarchy. Each of these political structures has been generated from one abstracted ruleset, and each component of that ruleset can be – in principle – turned on or off at will by the community concerned. Moreover, the rules are self-reflexive – i.e. the community can also create structures to govern how those rules are changed. In other words, members of those communities can choose to shift to a different political model, or can develop their own through incremental changes to sections of the ruleset, allowing moderators or administrators or normal users to create the ‘laws’ that govern how they inter-relate.

This self-reflexive component would operate with a bill-like structure – i.e. an individual would be able to propose a new rule or a change to an existing rule that then may or may not require some form of wider ratification before it becomes ‘real’ and starts empowering or constraining the citizenry of that board.

When a new user joins the community, s/he is presented with the current political structure of each board and from that point chooses one to be affiliated with. S/he is then part of the population of that community and can rise up through the ranks (if there are ranks) and participate in the functioning of that political community. This goes right down to the creation of different parts of that community, how the various parts of the community inter-relate with one another and who can post what and when.

Each community will have its own strengths and weaknesses – some will no doubt go horribly politically wrong and have power seized by mad administrators, but hopefully others will find their own kind of political equilibrium after a while – and maybe that political equilibrium could be a good model that could be genericised and used as a more common and rigid platform for new online communities that aren’t interested in the emerging rule-set component. That is to say, maybe we can evolve a better system for handling debate, discussion and power relationships in messageboards and other online community spaces and games. Of course, for that to happen, the ruleset has to be sufficiently politically abstractable that new arrangements could emerge that didn’t initially occur to us during the creation of the ruleset, and the reflexive process has to be comprehensible to real users. There’s a rough code sketch of how these bills might hang together after the examples below.

Some sample bills:

  • Anna proposes a bill:

    Junior members to not be able to create threads.

  • Bill proposes a bill:

    Administrators to not be able to change user roles.

  • Charles proposes a bill:

    Junior members to be able to create posts. Action will require ten ratifications from Moderators, Administrators, Normal Members. One disagreement can veto.

  • David proposes a bill:

    Moderators to be able to edit abstracts. Action will require three ratifications from Moderators or Administrators. Three disagreements will veto.

  • Edgar proposes a bill:

    The User Responsible to be able to change their own display name. Action will require no ratifications.

  • Fiona proposes a bill:

    Normal Users to be able to Unblock Users. Action will require 60% assent from Normal Users polled over 24 hours.

  • Gavin proposes a bill:

    Normal Users to not be able to propose bills. Action will require a 51% decision of all users polled over a 6 hour period.
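
And here’s the promised sketch of how a bill-driven, self-reflexive ruleset might be modelled – a minimal, entirely hypothetical bit of Python, where the names, roles and thresholds are illustrative and nothing like this actually runs on Barbelith:

    from dataclasses import dataclass, field

    @dataclass
    class User:
        name: str
        role: str  # e.g. "junior", "normal", "moderator", "administrator"

    @dataclass
    class Board:
        # the live ruleset: (role, action) -> allowed?
        ruleset: dict = field(default_factory=dict)

    class Bill:
        """A proposed rule change plus the conditions for enacting it."""

        def __init__(self, proposer, rule, ratifications_needed=3,
                     ratifier_roles=("moderator", "administrator"),
                     vetoes_needed=3):
            self.proposer = proposer
            self.rule = rule  # e.g. ("junior", "create_thread", False)
            self.ratifications_needed = ratifications_needed
            self.ratifier_roles = ratifier_roles
            self.vetoes_needed = vetoes_needed
            self.ratified_by = set()
            self.vetoed_by = set()

        def vote(self, user, approve):
            if user.role in self.ratifier_roles:
                (self.ratified_by if approve else self.vetoed_by).add(user.name)

        def resolve(self, board):
            # The self-reflexive part: a bill is just data until enacted,
            # and 'who may propose or ratify bills' is itself an entry in
            # the very ruleset that bills modify.
            if len(self.vetoed_by) >= self.vetoes_needed:
                return "vetoed"
            if len(self.ratified_by) >= self.ratifications_needed:
                role, action, allowed = self.rule
                board.ruleset[(role, action)] = allowed
                return "enacted"
            return "pending"

David’s bill above, for instance, would be Bill("David", ("moderator", "edit_abstract", True)) with the default thresholds; Gavin’s, if passed, would flip the ("normal", "propose_bill") entry and thereby change the constitution itself.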

I’d be interested in anyone’s thoughts around this stuff.

Categories
Family Net Culture Technology Television

Is the pace of change really such a shock?

I’ve got Matt Biddulph staying with me and have been hanging out with Paul Hammond a lot again recently, and since they’re both ex-BBC colleagues, we’ve inevitably found ourselves talking a bit about what’s going on at the organisation at the moment. And it’s a busy time for them – Ashley Highfield and Mark Thompson have made a couple of interesting announcements that contain a fair amount of value nicely leavened with some typical organisational lunacy and clumsiness. But that’s not what I want to talk about.

What I want to talk about is this, which is a link that I’ve already posted to my del.icio.us feed earlier in the day and will turn up later on this site as part of my daily link dump. For those who don’t want to click on the link, here’s the picture:

Now this is a photo taken in the public reception area of BBC Television Centre, but I want to make it really clear from the outset that you shouldn’t be taking it literally or seriously – it’s a prop, a think piece, to help people in the organisation start to think about the issues that are confronting them and come to terms with them. It has, however, stuck in my head all day. And here’s why…

The apparent shock revelation of the statement – the reason it’s supposed to get people nervous – is that it intimates that one day a new distribution mechanism might replace broadcast media. And while you’re reeling because of that insane revelation and the incredible insight that it contains, let me supplement it with a nice dose of truism from Mark Thompson:

“There are two reasons why we need a new creative strategy. Audiences are changing. And technology is changing. In a way, everyone knows this of course. What’s surprising – shocking even – is the sheer pace of that change. In both cases it’s faster and more radical than anything we’ve seen before.”

So here’s the argument – that perhaps broadcast won’t last forever and that technology is changing faster than ever before. So fast, apparently, that it’s almost dazzlingly confusing for people.

I’m afraid I think this is certifiable bullshit. There’s nothing rapid about this transition at all. It’s been happening in the background for fifteen years. So let me rephrase it in ways that I understand. Shock revelation! A new set of technologies has started to displace older technologies and will continue to do so at a fairly slow rate over the next ten to thirty years!

I’m completely bored of this rhetoric of endless insane change at a ludicrous rate, and cannot actually believe that people are taking it seriously. We’ve had iPods and digital media players for what – five years now? We’ve had TiVo for a similar amount of time, computers that can play DVDs for longer, music and video held in digital form since the eighties, an internet that members of the public have been building and creating upon for almost fifteen years. TV only got colour forty-odd years ago, but somehow we’re expected to think that it’s built up a tradition and way of operating that’s unable to deal with technological shifts that happen over decades!? This is too fast for TV!? That’s ridiculous! This isn’t traditional media versus a rebellious newcomer, this is a fairly reasonable and incremental technology change that anyone involved in it could have seen coming from miles away. And it’s not even like anyone expects television or radio to change enormously radically over the next couple of decades! I mean, we’re switching to digital broadcasting in the UK in a few years, which gives people a few more channels. Radio’s not going to be fully digital for decades. Broadcast is still going to be a dominant form of content distribution in ten and maybe twenty years’ time, it just won’t be the only one. And five years from now there will clearly be more bottom-up media, just as there are more weblogs now than five years ago, but I’d be surprised if they’d really eradicated any major media outlets. These changes are happening, they’re definitely happening, but they’re happening at a reasonable, comprehensible pace. There are opportunities, of course, and you have to be fast to be the first mover, but you don’t die if you’re not the first mover – you only die if you don’t adapt.

My sense of these media organisations that use this argument of incredibly rapid technological change is that they’re screaming that they’re being pursued by a snail and yet they cannot get away! ‘The snail! The snail!’, they cry. ‘How can we possibly escape!?’ The problem being that the snail’s been moving closer for the last twenty years one way or another and they just weren’t paying attention. Because if we’re honest, if you don’t want or need to be first and you don’t need to own the platform, it can’t be hard to see roughly where this environment is going. Media will be, must be, transportable in bits and delivered to TV screens and various other players. And there will be enormous archives available that need to be explorable and searchable. And people will create content online and distribute it between themselves and find new ways to express themselves. Changes in the mechanics of those distributions and explorations will happen all the time, but really the major shift is not such a surprise, surely? I mean, how can it be!? Most of it has been happening in an unevenly distributed way for years anyway. And it’s not like it’s enormously hard to see what you’ve got to do to prepare for this – find a way to digitise the content, get as much information as possible about the content, work out how to throw it around the world, look for business models and watch the bubble-up communities for ideas. That’s it. Come on, guys! There’s hard work to be done, but it’s not in observing the trends or trying to work out what to do, it’s in just getting on with the work of sorting out rights and data and digitisation and keeping in touch with ideas from the ground. This should be the minimum a media organisation should do, not some terrifying new world of fear!

I think this is the most important thing that these organisations need to recognise now – not that change is dramatic and scary and that they have to suddenly pull themselves together to confront a new threat, but that they’ve been simply ignoring the world around them for decades. We don’t need people standing up and panicking and shouting the bloody obvious. We need people to watch the industries that could have an impact upon them, take them seriously, not freak out, observe what’s moving in their direction and then just do the basic work to be ready for it. The only way that snails catch you up is if you’re too self-absorbed to see them coming.

Categories
Technology Television

A brief follow-up on TV distribution…

I wrote a post a few days ago called Quick observations on TV distribution in which I made a number of outrageous claims that I pretty much stand by. It was a bit of an off-the-cuff and not entirely digested attempt to throw out the core bits of the stuff that’s been in my head for a while, so I thought I should mention that I’ve posted a couple of comments in the thread, responding to some other people’s opinions and expanding briefly on a couple of the points:

(1) My assumption is that you pay these companies for a whole bunch of television you never even watch – that in terms of ‘must-see’ TV, people probably only really care about five to ten shows at any given time – and that most TV series arcs are between twelve and twenty-five weeks, so that’s between a quarter and a half of a year. So even at today’s prices, you’d be paying what – $350 every six months, the equivalent of sixty dollars a month, for ten fresh shows downloaded every week. Now that’s clearly too much money, but it’s not too much by an enormous margin. Drop it down by a third and, you know, you’ve got yourself a deal – between eight and ten shows a week distributed down to my equipment for me to own and use immediately and for as long as I like, for about $10 a week? That doesn’t seem so unreasonable. (I’ve sketched these sums in code after point three, below.)

(2) Think about it this way – the motivation for the content producers is not to give all the revenue to the content distributors, and they may not have to – you only have to look at the straight-to-DVD market that Disney exploits to see that, and many shows recently (Futurama / Firefly) make more money on DVD than on TV distribution. There’s already a market (albeit relatively small) for people to buy programmes that have never been (or barely been) on TV. And there’s a huge market for buying media outright. So if it’s in their interest to try and get rid of the middle-man (or find a new one that’s more favourable to them), then they’re eventually going to start working in ways that make things difficult for the TV channels, who obviously don’t want their audience balkanised. So they’ll either form partnerships with the content distributors for revenue sharing or they’ll gradually look towards different types of content that don’t suit download so well (Big Brother, perpetual rolling news, radio-style programming, live broadcasts).

(3) In terms of how you promote things if you just avoid broadcasting the shows themselves – well, the same way you promote everything else that isn’t a TV show. They promote films without showing them on TV first, they promote albums without people hearing them first. You can buy ads on the TV that’s left, you can put things in the papers, etc. My personal favourite – the US pilot season currently produces dozens of throwaway episodes that never get shown; instead, every episode produced as a pilot would be released to the public for free download (for the first month), and then if there’s enough interest in the show in terms of direct subscriptions or individual pay-for downloads, a full series gets produced. All TV shows are risks obviously, so this might move the burden of risk more onto the content producers than the networks, which might produce a more risk-averse environment and a need for those companies to get in more revenue with which they can support the failures, but this is only a shift in money generation from the networks to the studios, and that often happens with middle-men anyway. And on the other hand, self-financed projects might get more access to the mainstream, and fan favourites could be supported literally by the fans rather than by the advertisers. Componentised, smaller, more nimble, more responsive media focused on meeting every niche need. It could work enormously well.
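
As promised under point one, here are the back-of-envelope sums – every figure an assumption rather than a real price:

    # back-of-envelope sums for point (1); all figures are assumptions
    shows = 10            # 'must-see' shows at any one time
    per_season = 35.0     # ~$35 a season at today's per-episode prices
    seasons_a_year = 2    # 12-25 week arcs, so roughly two a year
    yearly = shows * per_season * seasons_a_year  # $700 a year
    print(yearly / 12)          # ~$58 a month, i.e. ~$350 per six months
    print(yearly * 2 / 3 / 52)  # knock a third off: ~$9 a week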

And I should also point out, to the people whose post I can see on Technorati but not on their own site for some reason, that I’m not so much predicting that “Internet TV will move from pay-per-episode to a pay-per-season, one-time subscription model”, but that pay-per-season, one-time subscription is the best way of getting hold of the programmes that you know you always want to watch, and that implementing podcast-like functionality alongside individual downloads at a higher price is the best way to meet user needs and to make downloadable programming a real partner to traditional broadcast.

Categories
Photography Technology

Paul Hammond has reignited Favcol…

So a couple of years ago Matt Webb made a little site called Favcol that you could e-mail pictures to. Once you’d done so, it averaged out the colours, and blended it all together with all the other pictures in the system in an attempt to find the web’s favourite colour. It was pretty awesome, actually.

Anyway, it’s back! But this time there’s a twist. In fact there are a couple. First up – Paul Hammond has taken up the mantle from Mr Webb. Secondly, he’s rewritten the code and started to push the concept in an entirely new direction. Now all you have to do is tag up your photos on Flickr with ‘favcol’ to have them appear on the site. This is, frankly, pretty neat and pretty easy. Interestingly, when looking for an earlier post on Favcol, I stumbled upon this old post from the plastic past – On Flickr, Favcol and my experience of weblogging – in which I sort of propose something similar, which just goes to show that, LazyWeb be damned, building something well is way way way more important than thinking about shit.
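
For the curious, the heart of the original idea is small enough to sketch in a few lines of Python – this is just the naive mean-of-all-pixels reading of ‘averaging out the colours’, not Paul or Matt’s actual code, and the filenames are made up:

    from PIL import Image

    def average_colour(paths):
        totals, pixels = [0, 0, 0], 0
        for path in paths:
            # shrink each image first so the loop stays cheap
            img = Image.open(path).convert("RGB").resize((50, 50))
            for r, g, b in img.getdata():
                totals[0] += r; totals[1] += g; totals[2] += b
                pixels += 1
        return tuple(t // pixels for t in totals)

    print("#%02x%02x%02x" % average_colour(["photo1.jpg", "photo2.jpg"]))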

PS. Wow, Lazyweb.org has been trackback spammed to hell and back.

Categories
Talks Technology

About two hours until I talk at ETech 2006…

In about two hours, I’ll be talking at ETech 2006 – presenting a slightly adapted version of my Native to a Web of Data talk. If any of you are at ETech and have seen the presentation before (or listened to it online) – please don’t come! I’m hoping to rehash most of the decent jokes, and you’d completely throw me off my stride. To be honest, I’m only really even writing this post because I want to have a self-referential slide in my presentation near the end indicating that I’m writing stuff live from the conference, so I’ll need a screen-cap.

The conference this year has been pretty good so far, all told – not least for setting up a number of opportunities to collide with my disparate groups of nerds – the British contingent (mostly ex-BBC who I can’t see enough of: Webb, Biddulph, Alice et al), the Yahoo! contingent (Chad, Karon, Simon, Leonard, Danah, Jeffrey and others) and all the wonderful extended networks that I don’t get to connect with so often – people like Ben and Cory and Ben and Derek and Veen. I could go on pretty much ad infinitum, but I can’t imagine it’s a fascinating read for you guys.

In terms of sessions, I’ve been to many. The Multi-touch interface high-order bit kind of rocked, Ray Ozzie’s session on cut and paste of microformats in the wild almost changed my mind about embedding data directly into pages (but not quite) and the session from the last.fm guys was solid enough – although it didn’t really sell the wonder of it to people around the room as much as I might have liked. I wasn’t enormously impressed by the Eric Bonabeau session – the only thing that really got me excited about it was the idea of creating a recipe space within which you could apply evolutionary principles to find varied and interesting ways of cooking limited amounts of ingredients onboard space missions. That kind of rocked, but was a twenty second throwaway at the end of a talk that otherwise didn’t seem to me to say anything enormously new. It’s easy to get blasé about innovation at ETech though, so perhaps that’s unfair.

I didn’t learn an enormous amount from Peter Morville’s talk but that’s probably because I’ve read most of his recent major work. The Microformats session was also pretty solid, but I didn’t get much new from it that I hadn’t gleaned from a systematic interrogation of their site. I’m still thinking about Clay’s talk on moderation strategies. It’s a noble goal, and one that I’ve been interested in for a while (I even proposed a talk to ETech around a similar area a few years ago), but I’m not convinced that it really got into the meat of the territory. That’s probably also unfair, given the shortness of his slot.

The three highlights for me so far have been Linda Stone on Attention (I saw her at Supernova last year, and this was a cut-down version of that talk, but still just as urgent and prescient), Webb and Cerveny on playsh which just seemed to be endlessly entertaining and inventive code-play, and Bruce Sterling’s piece on Spimes and design and innovation and language. I’m in the process of digesting his book at the moment, so that was all pretty rewarding.

I have to head off now to put the finishing touches to my slides, but I’ll try and write up some more of my thoughts over the next couple of days. These events are all about refilling the cup of creativity when you get tired and jaded by your immediate missions, and as usual ETech fills me up to and sometimes over the brim. Too much to think about, too little time. I’m looking forward to the couple of weeks afterwards when I can finally digest everything.

Categories
Design Navigation Net Culture Social Software Talks Technology

My 'Future of Web Apps' slides…

Right then. My slides. I’ve been trying to work out the best way to put these up in public and it’s been more confusing than I thought it would be. Basically, the slides are so Keynote-dependent and full of transitions and weird fonts that they would translate very badly to PowerPoint – and with no one having the fonts, the presentation would look pretty terrible anyway. So I’ve decided to put it out there in two forms – both simple exports of a slightly adapted version. If you want the PDF it’s here: Native to a Web of Data (16Mb). If you’d rather view it online directly, then I’ve used the export-to-HTML feature (which I’m beginning to suspect might kind of suck a bit) to produce the likely-to-crash-your-browser-with-its-hugeness Native to a Web of Data.

The biggest question I’ve been asking myself is whether or not it’ll make any sense as a standalone presentation, and I’m afraid to say that the answer is ‘sort of’. Without my notes there are great chunks where I’m afraid you’ll have to make pretty substantial leaps to keep the thread of the thing, which is hardly ideal. What I should really be doing is writing the thing up in a more logical, thorough and coherent way, but I’m not sure I’ve got the mental agility to do that at the moment. So enjoy it in as much as you are able, and I’ll think about writing it up over the next few weeks.

As usual I have to preface all of this stuff with the normal disclaimers. The views presented in this presentation do not necessarily represent the views of my employers.

Categories
Design Navigation Social Software Technology

On Metafilter's folksonomic subdomains…

I’m going to move on quite quickly to something way, way less embarrassing and mainstream – back into the boring semi-beating heart of one of my pet work-related fetishes: the folksonomy. In particular I thought I’d talk about a new development over on Matt Haughey’s Metafilter, written up on Metatalk. Each post on Metafilter can be tagged folksonomically by its author when it’s created – so when I write a post on trees, I can add a few keywords like trees, plants and leaves to make it easier for other people to find it later. What Matt has added recently is a different way to get nice, easy-to-browse, sharded versions of Metafilter, by making it possible for people to use tags as a sub-domain. So, for example, tree.metafilter.com is now a kind of ‘treefilter’. And plants.metafilter.com is ‘plantfilter’. And so on…
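
Mechanically, I’d guess the trick amounts to little more than reading the hostname and treating its first label as a tag filter – a speculative Python sketch, emphatically not Matt’s actual code:

    def threads_for_host(host, all_threads):
        # tree.metafilter.com -> only threads tagged 'tree';
        # the bare domain or www falls through to everything
        sub = host.split(".")[0]
        if sub in ("www", "metafilter"):
            return all_threads
        return [t for t in all_threads if sub in t["tags"]]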

My first reaction was extremely positive – I think it’s a great idea to help Metafilter serve more constituencies by providing what amount to multiple homepages. And I love the idea of using tags elegantly to create new ways to browse around and explore large content sites. In fact, a few years ago I spent a fair amount of time hassling Matt to start regional Metafilters for people of different cultures and backgrounds, arguing for a version of the site for the UK or London (see On Regional Metafilters and Matt Haughey wants me dead). This tag format makes that actually practical – there really is a uk.metafilter.com now and a europe.metafilter.com. And it’s interestingly extensible in all kinds of neat directions.

But there’s something troubling about it for me, and I think it’s the idea that a single thread on Metafilter can now have a great variety of URLs. The current top thread on europe.metafilter.com is called Sieg Whaaat? and its URL is europe.metafilter.com/mefi/48225. But now, suddenly, it also has twenty other URLs, including germany.metafilter.com/mefi/48225, berlin.metafilter.com/mefi/48225 and music.metafilter.com/mefi/48225.

Now I know that various search engines can compensate for the same content being displayed in multiple places, but it’s got to affect Google rankings, or any solid concept of one addressable web-page per resource on the internet. And even if it doesn’t affect the big players too much, it’s inevitably got to screw up all the other smaller services that use URLs to identify resources. How will Technorati or del.icio.us handle this stuff now? How will you be able to aggregate annotations or comments upon a thread in Metafilter in a coherent fashion without making it some kind of special case?
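
The aggregators could, I suppose, partially defend themselves by folding the variants together – the kind of normalisation I mean is sketched below, though it’s pure speculation and not anything Technorati or del.icio.us actually does (and the www. address is an assumption too):

    import re
    from urllib.parse import urlparse

    def canonical(url):
        # fold any tag-subdomain variant of a thread back to one address
        parts = urlparse(url)
        m = re.match(r"/mefi/(\d+)", parts.path)
        if parts.hostname and parts.hostname.endswith("metafilter.com") and m:
            return "http://www.metafilter.com/mefi/" + m.group(1)
        return url

    # both variants collapse to a single canonical URL
    assert canonical("http://europe.metafilter.com/mefi/48225") == \
           canonical("http://berlin.metafilter.com/mefi/48225")

But even that only patches over the deeper problem of the site itself minting twenty names for one page.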

It’s such a shame really, because there’s a hell of a lot of potential here. Really what you want is some way to make these homepages as useful as they are without carrying the URL structure through into the individual thread pages. It seems clear why he hasn’t done this, of course – if you want to keep someone within a conceptual sub-site like gardening.metafilter.com then you have to change all the links contextually around the page to make it seem like a coherent site – on the destination pages as much as on the indices. And that means either some form of cookie-like approach that keeps track of how you found a link, or something in the URL. The former approach doesn’t work so well because it means that you can’t easily send someone a URL and be sure they’re seeing the same thing you saw. You might be recommending a great page on a site about gardening, only for them to see it as a generic and intimidating entry on Metafilter central. The latter approach creates URLs that either proliferate versions of the same page, or are full of query strings (which are somehow less definitive in their addressing of a page).

All in all then, I applaud the intent a lot but think the implementation is profoundly broken. Unfortunately I can’t think of a solution.

On a related note, though, the whole tagging thing is starting to get me really excited, because it kind of makes whole database schemas quick to upgrade – you can add loads of fascinating functionality really quickly. Imagine, if you will, that any thread started in a sub-domained area includes (by default) the tag for that area. It doesn’t do this at the moment, but it could do so really easily and could start generating nice feedback loops.

Or take it in a completely different direction – get rid of tags from the subdomains and instead put in tags that represent languages. So you create a form of tags which operates as a key:value pair with a code something like lang:english or lang:francais and then present a default English homepage to Metafilter with links to english.metafilter.com and francais.metafilter.com on it. You then encourage people to post links in French on the latter one, and automatically tag each of their posts with lang:francais as you do so. This would create real meaning in the subdomains and would keep the URL space nice and tidy. To browse a sub tag then, you’d have a URL like http://english.metafilter.com/tag/iraq, with all the threads within that area given URLs like http://english.metafilter.com/mefi/34566.
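
To make the lang: idea concrete, a last little sketch – hypothetical function names throughout, only gesturing at how the defaults might work:

    def tags_for_new_post(subdomain, user_tags):
        tags = set(user_tags)
        if subdomain in ("english", "francais"):
            tags.add("lang:" + subdomain)  # auto-tag by posting location
        return tags

    def homepage_threads(subdomain, threads):
        # francais.metafilter.com lists only threads tagged lang:francais
        return [t for t in threads if "lang:" + subdomain in t["tags"]]

    print(tags_for_new_post("francais", {"iraq"}))  # {'iraq', 'lang:francais'}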