Categories
Technology, Television

A really rough proposal for an Apple Media Hub (Part Two)

This part is mostly about: How to get a device into people’s homes that opens up these markets and these possibilities and lets people do more stuff with their stuff. If you have not done so already, you should first go and read Part One.

So from now on, I’m going to talk a lot about what I think specifically Apple should do, because I think they’re the best placed in the world to do this stuff right now. Their brand isn’t as geeky as Microsoft’s – and they’ve already demonstrated that people will buy entertainment technology from them (the iPod) even though they’ve traditionally been better known for their computers. Companies like Tivo – at least at the moment – are too narrowly focused on one form of digital media to really be able to mount that much of a challenge in this area (and I think they’re probably more than aware of that, judging by the way that they’ve started to work more with Microsoft). Sony would also be a good contender in this area if they weren’t busy trying to develop new types of smart guns with which they can shoot themselves in the foot with startling new levels of efficiency.

One of the big questions that we’re going to come up against in thinking through the home media hub will be how to get people to buy the devices we’re talking about. Not all new technologies gel with consumers immediately. One technique for getting new tech into people’s homes is to combine or hybridise it with existing equipment. Many people had their first DVD player as a pleasant side-effect of buying a Playstation or Xbox. The Xbox also allowed people to rip songs to its internal hard disk – functionality that I doubt many people would have gone out and directly bought at that stage. And Playstations with built-in PVRs have been mooted for a while now. Hybridisation seems to be an almost natural process when it comes to the handling of media – potentially because the web-like models and increasing mobility and granularity of media objects that we’re moving toward are turning out to be similar for video, web pages, songs, photographs… So hybridisation is likely to be an answer – but hybridisation with what?

So the obvious bits of existing technology that our future home media hub will need to be able to hook into are:

  • The internet – for the purchasing of on-demand video and music, and to have a return path for data
  • TV – for video playback and rich visual interfaces for stuff
  • Audio equipment / Stereos etc – for the playing of downloaded or ripped music
  • Home computers etc., that may wish to manipulate or handle media artifacts in some richer way

What hardware / functionality does this imply?

  • Obviously the first piece of implied functionality that any hub has is a large amount of digital storage and some way of navigating around the material stored upon it.
  • If we want this device to have mass appeal then it has to be something that you could sell – or rent as part of a service package – to people who don’t yet have internet access (or don’t even have a computer). One clear way to do this would be to disaggregate internet access from the computer, and instead place some form of cable or ADSL modem within the media hub itself. This would seem to me to be the most reasonable approach.
  • In order to distribute that internet access to other devices, the simplest solution would seem to be to fit the device with a wifi hub. This immediately creates the potential for a local area network of connected devices streaming and connected to one another without the complexities of setting up dedicated routers. In principle then small Airport-Express style local hubs could be distributed next to other devices in the home that do not yet have home local area network functionality built into them. This could also distribute the internet access throughout a home in the most effective way.
  • If there are going to be any cabled connections then it would make sense to have them be between the device and the equipment with which it is likely to exchange the most data. This may very well be a local computer or laptop for some people, but for most – with video files being so enormous – it makes most sense for the device to be attached to the home television.
  • And in order to be a useful device for storing or playing digital music – if again we assume that we’re marketing to people who may not have a computer already – we have to consider putting some kind of CD-playing or ripping functionality into the device itself.
  • This has another advantage, because if we’re considering putting in a CD slot, then there seems little reason why the same slot should not be usable for DVDs. This would then immediately increase the value of the device for people – particularly given that we’re already assuming that it’s going to be connected to the television in some way.
  • And given that we’re talking about handling large files and connecting to a television – PVR functionality seems like a natural fit.

So far then, we have a box (and not yet a desperately inexpensive one) that should be connected to a television, has a large hard-disk within it that can contain video and audio and has a slot for inserting DVDs or CDs. This is, essentially, a cut-down, monitor-less iMac – for those of you who have been paying attention. Clearly you’d want to create non-OSX-style interfaces since people are likely to want to use the device via a remote control of some kind, but if such a device has been mooted for around $500 (by Mac rumour-mongers), then it’s not inconceivable that you could bring something to market that people would be interested in buying.

In fact, if you look at devices that are already on the market, there are some that are not a million miles away from this model already. Two particular devices are already available that attach to a television, have large hard disks, have some wired connections via the telephone or ADSL – and one of them already allows the ripping of music. On the one hand we have the dedicated PVR and on the other the dedicated gaming platform.

In order then to get this new device into people’s homes you could either:

  1. Further develop the entertainment features of gaming consoles, making them into full media hubs for the home.
  2. Further develop the connectivity and cross-media functionality of the PVR to a similar effect.

Marketing our partially mooted speculative device as an extension of a gaming console has some clear advantages – there’s already a decent amount of money to be made from gaming, the people who use them are keen consumers of new technology and much of the functionality is already in place. If I were Microsoft or Sony I would be moving in these directions. But I think I’d also be looking for a model that would appeal to a wider market than gamers. For many the gaming elements of a hybrid device could be a turn-off.

PVRs are similarly strong in some areas, but they haven’t yet set the world on fire. The consensus opinion appears to be that the main problem is that it is hard to articulate to people what a PVR does. Once people have bought one, they generally find them a delight to use. In which case a company like Apple that has little chance of being able to bring a successful gaming console to market is still in with a chance. It’s not necessarily terribly hard to evolve the concept of the PVR into a cross-media device, for watching and recording TV, watching DVDs and streaming music around a home. And once you’ve turned it into a media manager, then all Apple would have to do is make reference to their other successful piece of navigational technology:

DVD Player, iTunes at Home, PVR – it’s the iPod for everything else

At this point you should have a bit of a sense of the direction I’d be going with this stuff, so I’ll move on to putting up some illustrations of the concept and trying to articulate precisely how I think it should work…

Categories
Technology, Television

A really rough proposal for an Apple Media Hub (Part One)

This part is mostly about: The drive towards digital media in the home, and changes in the media creation and distribution ecology:

So I’m going to write this quickly because it’s been stuck in my head for months, and even when I’ve dedicated large chunks of my weekend to trying to get it down, it hasn’t gone terribly well. So my assumption is – find a place to start. Run at it like a mad thing. Don’t worry too much if it’s really dense or confusing, or childishly stupid in places or really badly written or wanders off the mark. Just get it out in the open with all the pretty pictures and stuff you’ve been thinking about before you go insane.

The starting point is an assumption. It’s that we’re looking for better connections between our entertainment devices and our computers, and that we’re increasingly looking towards fully digitally-distributed entertainment media in all parts of our lives. Fundamentally I’m asking: what kind of future home-entertainment-based, digital media playing and manipulating Digital Hub should we be aspiring towards?

The other assumption is that there are companies out there who are interested in working in this space. I think this is demonstrable by looking at the people already in the environment – companies like Sky, Tivo, Apple, Microsoft etc. Again – not a huge assumption to be making.

One of the reasons that companies are interested in this space is because we’re finally reaching the point where home entertainment electronics are converging with computer technologies and the internet. A whole generation of web users are circumventing traditional media distribution channels to get hold of their television programming, films and music. The people who make these things are now looking to take back the initiative and bring greater functionality – functionality under their control – back to the people.

In the meantime, the companies that make technology and software have clearly realised that there’s money to be made in meeting these needs and so are developing technologies at a fair rate of knots. One way to think about this is to think about the number of desktop and laptop computers in the world and then to think about the number of televisions, VCRs and standalone DVD players in the world. Now imagine that you’ve cornered the market for the operating systems for all these devices. For Microsoft – that’s got to be an insanely attractive proposition. You add to that some attempts to help mediate between content producers and people who watch and view content – DRM, for example – and you start to look indispensable to the ecology. And that’s kind of a license to print money – particularly if (like Apple) you’re also trying to make your service the definitive place to buy the media products themselves…

You could take this still further. Traditional broadcasters have had it pretty easy. They had regulated but basically pretty neutral carriers that they paid cash money to. Then they got to broadcast a set of TV and radio programmes when it made most sense to them, and marketed against the relatively limited number of other networks who constituted their competitors. The explosion of TV stations made that situation a bit more tenuous, but that’s nothing compared to the enormous changes that are coming up. Given that there’s value in the long tail, and that programmes are increasingly time-shifted and watched at the audience’s convenience, we can predict with increasing accuracy that we are approaching a time of addressable and permanently online programming, downloaded or streamed or distributed on-demand.

Basically this is broadcast media becoming more like the web. And when you have a web-like ecology of programming out there, then you need mechanisms for finding programming. The mediators in this environment have tremendous power – they can build collaborative filtering mechanisms or page-rank mechanisms or whatever to move you from one type of media product to another. They can sell advertising on their services to direct you to one type of media rather than another. On the web these mediators are in competition with one another – search engine versus search engine, e-commerce venture versus e-commerce venture. But if these mediation mechanisms are built into the very hardware you’re using, then you essentially have some form of lock-in. This is why – in my opinion – it’s really really good for Sky that they control the EPGs on their platform and the PVR functionality, but less good for the BBC.

In a nutshell, early adopter consumers are working around the current short-falls in entertainment technology, and that suggests that there’s a market for new and better media-handling devices. And there’s a hell of a lot of money and power to be made in this environment too – both in terms of directly making cash from licensing software and selling appliances, but also in terms of controlling and influencing the interface between consumers and the media they might want to consume.

Now read Part Two: How to get a device into people’s homes that opens up these markets and these possibilities and lets people do more stuff with their stuff.

Categories
Radio & Music, Technology

The New Musical Functionality: Portability and access

The other day I started this run of posts on the New Musical Functionality by arguing that the behaviour of an until-recently small group of digital music fans seemed to be now spreading into the mainstream. I also listed four areas that seemed to me to be where the most significant changes in consumption patterns were occurring – areas to which I believe that anyone building sites, services or hardware around music should be paying close attention. These four areas were (1) portability and access, (2) navigation, (3) self-presentation/social uses and (4) data use and privacy. Today I’m going to concentrate briefly on the trends towards portability and access.

This may seem like an obvious place to start, but I think it’s an important thing to get out in the open: the core difference between an iPod and a CD Walkman isn’t audio quality. That’s not to say that there isn’t a difference in the audio quality between the MP3/AAC file and CD ‘originals’ because – of course – there is and it is a significant one. However, in defiance of the normal path of technological achievements, the newer technology does not have the advantage in reproductive fidelity. In the future this may change (Apple’s lossless compression and increasingly cheap storage space are just two of the reasons why), but at the moment MP3s and AACs use lossy forms of compression and for this reason simply do not sound as good as their CD originals. It would probably be pushing it to say that this is the first significant change of popular audio format that actually made the sound quality worse (vinyl fans have been criticising the CD for that for years), but it does at least seem to be one of the first where claims of improved sound haven’t been a major selling point.

So why are these new formats and players starting to occupy the mainstream so effectively? What is it that means people want iPods so desperately even though they’re effectively purchasing a technology that will result in a decrease in audio quality? Again the answer is so obvious that it hardly bears repeating – particularly given that it’s on every single bloody advert that Apple produce. The reason that people are buying iPods is because they want 10,000 songs in their pockets. They want access to music wherever they are in the world. More still – they want access to all their music everywhere. Every last bit. Every last place.

As I’ve said, this sounds obvious but it is important. It’s important because once we understand the need that a product is filling, we can attempt to find other/better ways of filling it. The iPod’s current success has demonstrated that the need exists – and how – but I would argue that in the longer term it is by no means obvious that the need would be best served by small portable hard discs embedded in MP3 players.

It doesn’t take a lot of foresight to see the scope for development in this area. In the short-term, the trend seems fairly clear – storage capacity looks set to increase and/or devices look set to get smaller. This has been the trend of almost all computing technology over the last few decades (cf. Moore’s Law for the near-parallel phenomenon happening in processor speed). Given these fundamental developments, there isn’t an enormous number of directions that these devices can go.

The first two options for future product directions around this stuff are (1) larger capacities and (2) smaller form factors. We have already seen movements in both of these directions (iPod Mini / 60Gb iPod coming). However, there’s only so far that either of these trends can develop.

Increased capacity ceases to be interesting at the point where there is more capacity than data to fill it – hence the problem with saying that newer iPods can hold 10,000 songs. There are very few people in the world who would be capable of, let alone interested in, sourcing that much music. After listening to my music exclusively through a computer for the last two or three years, I’ve still only got 8,000 MP3s. And I’m hardly representative. If we’re talking about significant subsequent increases in capacity then there are some pretty clear limits in place. 10,000 songs is about a month of solid listening. 100,000 songs would be getting on for a year. 1,000,000 songs a lifetime. Somewhere between a month and a lifetime, the marginal utility of another song being on your iPod reaches zero (even assuming that physics lets you get to that size in the first place).
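
To put some very rough numbers behind those claims (assuming an average track length of about four minutes, and a couple of hours of listening a day for the ‘casual’ figure – both my assumptions rather than anything measured):

    # Back-of-the-envelope check on the songs-versus-listening-time claim.
    # Assumes an average track length of ~4 minutes (an assumption, not a measured figure).

    AVG_TRACK_MINUTES = 4

    for songs in (10_000, 100_000, 1_000_000):
        total_hours = songs * AVG_TRACK_MINUTES / 60
        solid_days = total_hours / 24          # listening around the clock
        casual_years = total_hours / 2 / 365   # listening ~2 hours a day
        print(f"{songs:>9,} songs ~ {solid_days:,.0f} days of solid listening "
              f"(~ {casual_years:,.0f} years at two hours a day)")

    # 10,000 songs is about a month solid; 100,000 is getting on for a year solid;
    # 1,000,000 is several years solid - or most of a lifetime at casual listening rates.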

Of course when we talk about capacity in terms of songs we’re kind of missing the point. From this point on, advances in capacity are more likely to allow us to listen to higher quality audio than they are to increase the number of songs that people want to listen to. A tenfold increase in portable storage would mean that a future iPod could carry the same number of songs as a current iPod except in Apple Lossless formats that have all the sound quality of a CD. A parallel increase in bandwidth speeds could mean that the last few decades of work on compression could become fundamentally redundant – much like the techniques that meant programmers had to write whole applications to run with 8k of RAM are now pretty much irrelevant. So this is clearly a direction things are likely to move over the next few years. But even this has its limits. Once you’ve escalated disc size ten times there’s nowhere to go in terms of audio quality – or at least, nowhere that will make the slightest difference to most individual consumers. So again any subsequent growth in capacity will have to be sold in terms of an increased number of songs that could be held – and as such the gradual diminishing marginal utility problem comes in again. Increased capacity, therefore, has only so much of a shelf life – can only go so far before it collapses under its own weight.
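
For a sense of the arithmetic behind that tenfold figure (the bitrates here are ballpark assumptions of mine: roughly 128kbps for a typical lossy encode, 1,411kbps for uncompressed CD audio, and Apple Lossless usually landing somewhere around half of CD size):

    # Ballpark figures only - real bitrates vary per track and per encoder.
    AAC_KBPS = 128        # a typical lossy AAC/MP3 encode
    CD_KBPS = 1411        # uncompressed 16-bit / 44.1kHz stereo PCM
    ALAC_KBPS = 700       # rough average for Apple Lossless (about half of CD)

    print(f"Uncompressed CD audio is about {CD_KBPS / AAC_KBPS:.0f}x the size of a 128kbps file")
    print(f"Apple Lossless is about {ALAC_KBPS / AAC_KBPS:.1f}x the size of a 128kbps file")
    # So a roughly tenfold jump in storage covers the move from lossy files to
    # CD-quality audio, with some headroom left over.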

The other potential obvious future direction – as I’ve said above – is to make the appliances themselves smaller. Here again there are limits to utility. There would seem to be a size under which a device ceases to be practical – that size being directly related to the size of interface elements, screens and buttons, which in turn relate directly to the size of fingers and thumbs and the limits of human vision. Now again, you can merge this direction with the increases in capacity – settle on a bottomed-out form factor and gradually increase the capacity within it – and no doubt this is the main approach that people like Apple will take over the next few years. At least that is until physics steps in or human interest (in having unlistenable amounts of music) begins to wane – both of which are probably a way off, but remain definite limits to future development in these directions.

Of course, there are certain conditions where an appliance may usefully shrink below the size of its interface, and that’s when it shares that interface with a number of other pieces of technology. This is the approach that the mobile phone manufacturers have taken – as the phones became almost unmanageably small, people’s attention moved instead to enhancing functionality and adding in cameras, PDAs, web-browsers, comms equipment, bluetooth and the like. This had the effect of keeping the form factors at manageable sizes while still allowing competition and product development to occur. There’s absolutely no doubt that this kind of hybridisation will be / is already a core part of the development of portable digital music players. Much of this hybridisation results in useful connections and possible new products emerging from music devices that are permanently network-enabled.

All of this previous stuff has been relatively uncontroversial – it’s no more than the immediate development along a couple of pre-existing axes of the products we have in our stores today. The incorporation of network-enabled devices has the capacity to change things a lot though. This is where alternative models for fulfilling a desire for universal access and portability are likely to start emerging more strongly. We currently seem to be moving towards a world with greater and greater connectivity and one in which some kind of flat-rate, always-on broad-ish band internet access is likely to be integrated into pretty much all portable devices. This opens up other possibilities for having access to all of your music wherever you might be – and without actually carrying any of the files around with you. We could be looking towards a near future in which all of your media (and perhaps applications and information) can be held ‘in the sky’ and streamed/downloaded down to whatever appliance you like as and when required. Where this repository would live (with an ISP, with your home server, on your TV’s set top-box, on Apple’s iTunes Music store) is not immediately clear. But it’s conceivable that – given enough bandwidth and centralisation – massively redundant models like we have at the moment where everyone has their own copy of a music file could be replaced completely by centralised music-on-demand services. Personally, I’m not much convinced that particular extreme is likely – people still seem to like to own music and still think of it as an object rather than as a service – but that’s not particularly relevant. The important aspect is simply that the same user need can be met in different ways.

So will we move towards larger portable hard discs or towards connected repositories explorable through massive bandwidth? Probably the direction that we take here will depend on nothing more elegant and interesting than financial cost. If enormous storage options were to become enormously cheap and small, then carrying a significant hard disc is likely to remain the preference of individual music fans. On the other hand, if bandwidth became cheap, then we’ll probably find ourselves in a more service-driven and centralised streaming-based world. The model that dominates is likely to lie somewhere in between the two – in hybridised technologies that use hard disks as local copies of stashes of music held in more centralised locations – using the network to sync as and when appropriate (see note) and as a mediator for various forms of engagement, navigation and data-mining around and in-between individual listeners. But more around that stuff in the next part of this sprawling rant around the New Musical Functionality: On trends in navigation…. (Coming Soon)

Note: Syncing becomes very important in a world with innumerable devices and limited connectivity. On a slight tangent – there are innumerable hybrid models where increases in portable data collide with the ability to access data at a distance. At the desktop level you can imagine computers running off the wired internet creating the impression of your ‘home’ computer wherever you sit, and on the portable level with large local storage being kept up-to-date perpetually via slower trickle-fed syncing protocols.
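
As an illustration of the kind of trickle-fed sync I mean, here’s a minimal sketch (the class and method names are hypothetical – this isn’t any real syncing protocol, just the shape of the idea): the local disk acts as a cache of the centrally-held stash and quietly catches up whenever a connection is available.

    import time

    class TrickleSyncDevice:
        """Hypothetical portable player that keeps a local copy of a
        centrally-held music collection, syncing whenever it can."""

        def __init__(self, local_store, remote_repo):
            self.local = local_store    # dict: track_id -> version held on the device
            self.remote = remote_repo   # assumed to expose changed_since() and fetch()
            self.last_sync = 0.0

        def sync_once(self, network_available):
            if not network_available:
                return 0                # offline: just keep playing the local copies
            updated = 0
            for track_id, version in self.remote.changed_since(self.last_sync):
                if self.local.get(track_id) != version:
                    self.remote.fetch(track_id)   # pull the new or changed file down
                    self.local[track_id] = version
                    updated += 1
            self.last_sync = time.time()
            return updated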

Categories
Radio & Music, Technology

The New Musical Functionality…

Over the last few months webloggia has been full of discussions about the new musical functionality that’s starting to emerge around the web. I wasn’t immune from this trend – I wrote about MediaUnbound (On MediaUnbound and Recommendations Engines) and linked to the (currently pretty awful) Music Recommendation System for iTunes. Dan Hill has also been talking around the subject, talking about first Socialising mp3-based music listening and then about whether recommendations scale. And those minxes over at 2lmc linked and commented upon the views of people who are suggesting better ways that iTunes could handle transitions between songs. And of course the new version of iTunes and the iTunes Music Store also now has the user-generated iMix feature – standard web-native functionality which allows people (and now people in the UK, France and Germany rather than just the US) to put mix tapes on the web where other people can rate and/or buy them. And that’s just the tip of the iceberg…

Then of course there are the staples of this new musical functionality – from the rapidly-becoming-indispensable audioscrobbler (which uses the flexibility and granularity of net-enabled MP3 playing devices to create charts, lists and recommendations) through to the self-generating radio stations like last.fm and launchcast. And then there’s all the little hook-in tools like iChatStatus (publish current listening to iChat’s presence display) and Kung-Tunes (publish current listening to the web) that have slowly become integrated into my life without my really noticing how they all hook together, communicate, branch off and build upon each other.
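
Most of those hook-in tools come down to the same small trick: ask the player what’s currently playing and push that fact somewhere else. A rough sketch of the idea (the endpoint URL and field names are invented for illustration – this isn’t Audioscrobbler’s or Kung-Tunes’ actual protocol):

    import urllib.parse
    import urllib.request

    def publish_now_playing(artist, title, endpoint="https://example.com/now-playing"):
        """Push the currently playing track to a (hypothetical) web endpoint."""
        payload = urllib.parse.urlencode({"artist": artist, "title": title}).encode()
        request = urllib.request.Request(endpoint, data=payload)
        with urllib.request.urlopen(request) as response:
            return response.status == 200

    # e.g. publish_now_playing("The Chemical Brothers", "Let Forever Be")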

All this new functionality is emerging at the same time (or at least starting to be adopted at the same time) because we’re beginning to see a world in which a decent number of early adopters are now starting to do a substantial portion of their listening on digital devices. Obviously the iPod has been the major success story here – the definitive product that has been encouraging people to do the necessary work to transfer their music into more easily manipulatable digital files. But the increasing prevalence of broadband and wireless connectivity is helping too – because it’s the connection of these appliances to the internet that has created the explosion in interoperable, interconnected devices, applications and people. Clearly, the number of people listening to music through these channels is still tiny compared to the entire music-consuming public. There may be many people using iPods, but there’s still an adoption path for moving all your listening into digital jukeboxes and being perpetually connected to the internet (ubiquitous, always-on, non-computer-centric internet in the home is a bit of an obsession of mine at the moment).

But this small proportion looks like it is set to grow. One of the first questions you have to ask yourself in any organic R&D role (which is I think how I’d characterise what I do) is am I a freak or am I an early adopter? You have to have some sense of how much your instincts and excitements are in tune with real people in the world because otherwise you cannot possibly evaluate how those people might respond to the products, concepts or propositions that you think are exciting. In this case, it’s becoming fairly clear that people who are listening to digital music and in connected ways are very definitely more like early adopters than they are freaks. They’re pointing in roughly the right direction. And there are now enough of them that it’s becoming more and more worth people’s time to build little tools or widgets or applications or paradigms or appliances or business models around them. Which in turn appears to be making the whole area still more attractive, creating a feedback loop that is pulling more and more people towards new ways of listening. I don’t want to sound too cheesy but I’m afraid I can’t help myself – it’s pretty clear that we’ve reached a critical mass and that new musical functionality is about to explode. The only question now is what will be there when the smoke clears?

Over the next few days I’m going to write about some of the core trends that I’m seeing in people’s use of digital music, attempting to extrapolate from some current behaviours that we’re all observing around us – concentrating on how people wish to interact and use their music. I’m not going to spend too much time on the way some people may wish to legislate against these desires or build around them – because I believe for the most part that any attempt to do so will inevitably fail. Competing models that more adequately fulfil those needs will rise to take over in their place. The model that meets the most needs (while having the least obvious encumbrances) will probably win in the really long-term, even if the market, commercial advantages or monopolistic practices deform it in the short to medium term.

I’ll be talking about four major areas that seem to me to be indicative of the unevenly-distributed musical functionality of the future – (1) portability and access, (2) navigation, (3) self-presentation and social uses of music and (4) data use and privacy. These trends within these areas are – I believe – representative of much larger trends across the consumption of all text-based, audio-based and video-based media and so it might be possible to draw conclusions beyond the consumption of music. I am however not planning to do so. And I make no claims that these areas of enquiry are absolute or canonical, or that there are no other areas that I should also be investigating. All I’ll argue is that these four areas are core to the movements that we’re currently seeing and that they are each likely to play themselves out in the product designs, interface designs and business models of the near future.

Of course what comes after that remains to be seen…

Tomorrow: The New Musical Functionality, Portability and access

Categories
Technology

A proposal for Wifi-hubs to be built into landlines…

Brief summary for people with too little time: slap an ADSL modem and wifi hub into every landline phone and allow them to network automatically with each other and you suddenly have a simple way to bathe an entire home in net-enabled connectivity without needing a computer. A more detailed investigation of this concept (with pictures) follows:

So I’ve been thinking a lot about ubiquitous home networks recently, and the ways in which various appliances might start hooking up to the internet and through the internet to other people – social hardware if you will – and the problem keeps coming back to how you introduce the network into the home in the first place. There needs to be a way of wrapping all the core parts of a home in a network without it being something that requires complex set-up and specialised hardware. It also seems to me that the key to true ubiquity is to detach the networking completely from its current reliance on a computer. Your home network of the future should not require a perpetually-on computer in a cupboard. Your gran should be able to have the benefits of internet-enabled appliances without having to figure out the configuration of modems and puzzle her way through a complex OS-based interface.

And if – as I assume – we’re talking about wrapping the home in a wireless network, then it also seems to me that we should be looking for a way to do all this without introducing lots more widgets and boxes and cables around the place. Ideally – we would also try and avoid having little appliances stuck into random power supplies around the house (unless of course we can take them in a different direction and use them as control nodes as well as bridges cf. Airport Express – but more on that kind of paradigm another time). Essentially, we need a model in which home, net-enabled networks are treated more like a utility than a technology – more like water or electricity provision than …

Okay – so now we’ve got the criteria in place, how should we go about making this wifi-enabled network space? Probably the place to start is at the bridge between the appliance (including potentially a computer) and the network. Since these appliances could be in pretty much every room, then the first thing we’re going to need is a series of wifi points littered around the premises. These ideally would cover the entire home, but if they couldn’t cover it completely they’d have to be in key areas like kitchens, studies, sitting rooms, bedrooms and the like. They would not be as useful initially in storage areas, hallways, lavatories, bathrooms or on stairs – although clearly it would be an advantage if they bled into those areas. These points need to be powered in some way and they’d presumably need to connect with one another as wifi bridges. One of these appliances has to be able to connect to the internet. More than likely they’ll do this via the telecommunications grid through a phone socket. And then there will have to be some kind of interface for setting up the connection and protecting it with some kind of password, encrypted and connectable to by some kind of industry standard protocol. This interface would not need to do anything else, but conceivably could do…

So here’s my contention. Given that it would seem to be a good thing to split the provision of wireless network access from computers, and given that we’ll still need an interface and given that we need a point in all the core rooms of a home and given that we need to connect this network to the telephone network in some way – isn’t the telephone itself the ideal appliance to be the heart of the home network? Unlike the television or the radio or the stereo, any place in a home where people are likely to spend a lot of time is likely to have a telephone point in or near it. They have small interfaces on them already – a numeric keypad for one and often a small LCD screen for recording input, and they’re already connected physically to the telephone network.

So here’s what I’m thinking – and forgive the slightly ugly 80s styling of the phone itself. I tried to do something beautiful and isometric but it came out looking really nasty. So we make do with gradient fills and basic Illustrator shapes…

So the ADSL modem and wifi antenna/bridge/hub are both included within the device. This means that in terms of buying a wifi network for your house, all you have to do is purchase the phone and plug it into a phone socket. By sticking an Ethernet port into the base of the phone you could immediately use it to connect to printers or any non-wifi enabled networkable device. If you bought a second phone, however, it would operate like a wifi bridge (there’s already considerable precedent for hubs also acting as bridges – with the Airport Extreme being the most recent example), extending the network around the home. If ADSL modems did not reduce significantly in cost, then perhaps you could remove that from the additional phone units, creating master and slave phones, each of which could be strung together to extend the network still further. If ADSL modems came down in price, however, it might be useful to build them into all the devices – allowing each phone unit to negotiate with the other phones as to when it should become the dominant provider of access to the internet (ie. if the connection broke down or if it became clear that one phone could provide more throughput because of the local quality of the line or intra-phone connectivity). Either way, you’d expect the network to self-organise purely by bringing a new phone home and plugging it into a socket. The blue-lines in the following image would be self-organising connections between phones based upon proximity and strength of signal:

So now we have a wifi network in the home, where all you’d need to do to extend the network is purchase a phone and plug it in. And we have a number of devices capable of connecting to the web. Except we’ve left out questions of user names / passwords / encryption and the like. Since we’re talking of this service as a utility, the most obvious way of handling it would seem to me to be to get your ADSL along with your telephony from the same operator. Since the operators already know the telephone number that the phone is plugged into (and will know this whenever you use a phone on that network) it seems most obvious to consider that telephone number to be your user name for connectivity and the name of the local network. This would mean that when the phone was initially connected it could attempt to connect immediately to the operator. At this stage the operator (or the phone) could generate a numeric key with which to access the network. All you’d have to do is plug the phone in and then ring up your operator. Since they already have security provisions in place to help identify a caller, they could easily determine that a user was legitimate and give out an initial code which said user could then use to log in to the network.

In practice this would mean the entire process to set up the network would be to plug in the phone, ring an activation number and get your code, hang up and type in the number. Any other phones you wanted to connect would just require you to plug them into the mains and type in the activation number. And then to log in from any device all you’d have to do is connect to the network which was called your home phone number (Network Name: 020 7286 ####) using (again) the activation number. Piece of cake!
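
To make that flow a bit more concrete, here’s a toy sketch of the activation logic (every name here – the classes, the key format, the fictional 020 7946 number – is a hypothetical illustration of the proposal, not any real provisioning protocol):

    import random

    class OperatorService:
        """Stand-in for the telephone operator's side of the scheme."""
        def __init__(self):
            self.keys = {}  # phone number -> numeric activation key

        def activate(self, phone_number):
            # Handed out over a voice call, after the operator's usual caller checks.
            key = f"{random.randint(0, 99_999_999):08d}"
            self.keys[phone_number] = key
            return key

    class NetworkedPhone:
        """A landline phone with a built-in ADSL modem and wifi hub."""
        def __init__(self, phone_number, operator):
            self.network_name = phone_number   # the network is simply named after the line
            self.operator = operator

        def enter_key(self, key):
            # Typed in on the phone's own numeric keypad.
            return key == self.operator.keys.get(self.network_name)

    # Plug the phone in, ring the activation number, type the code in:
    operator = OperatorService()
    phone = NetworkedPhone("020 7946 0018", operator)   # fictional example number
    code = operator.activate("020 7946 0018")
    print(phone.enter_key(code))   # True - the network is up and password-protected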

The process would have other possibilities too. By using a numeric key rather than an alphanumeric key you immediately open up the number of devices that can be easily set up to use the network. Numeric keypads are far more common than full text input devices and faster to use. It would take no time at all to connect your mobile phone, television, DVD player, Tivo, radio, CD player, tape deck and computer to such a network. But that’s just the beginning. Radio alarm clocks have keypads, microwave ovens have keypads. In fact the only electrical things that I can see around me in my flat that don’t immediately present some kind of numeric interface are my lights, iPod, digital camera, kettle, Xbox, toaster and oven – and four of those have an interface that would allow you to choose numerals in different ways.

So that’s the concept in a nutshell. I can see some problems with it with regard to the separation of telecommunications services and the necessary connections that you might need to make between hardware and service providers that might make the whole thing unfeasible. I’m also more than aware that there have been explorations of ways of connecting telephones and connectivity elsewhere – some of which no doubt overlap, encompass or surpass my thoughts – and no doubt I’ve made a few errors through the piece as well, but nonetheless I thought it was an interesting enough idea to push out into the real world and to receive feedback around. And that’s what I’m after now – please feel free to leave any thoughts, fixes, suggestions or extensions below or write a post and trackback to this one, so any interested parties can follow the discussion (if there is any) more easily…

Categories
Technology

First impressions of Tiger…

A few first impressions of Apple’s upcoming Operating System: Tiger:

  • Spotlight – basically an Apple-native version of Launchbar with a few self-evidently nice features (searching Mail). After using Launchbar for a while I can state without question that it comes to feel like part of the OS, so if the Apple version has a similar ability to be run without using the mouse, then I’ll be pretty happy. I doubt Objective Development will be, of course…
  • iChat AV – the new version allows you to conference call ten people and video chat with loads of people too. That’s pretty cool. I’m pretty bloody impressed by that. It’s going to be pretty awesome and totally what I’ve been hoping they’d do. It fits in really well with some other thinking I’ve been doing…
  • Safari RSS – sites with RSS feeds are marked when you go to them, which is cool, and then you can subscribe to them directly through the interface. I’m not convinced by their interaction design here – I suspect I’ll keep using NetNewsWire for the time being – but it’s certainly a positive step that can’t help but boost RSS penetration…
  • Dashboard – basically this one is a fucking rip off of Konfabulator and I’m pretty pissed off about it. I’m not a particular fan of the paradigm, but I don’t think that’s pertinent – unlike RSS feeds and the Launchbar knock-off there doesn’t seem to me to be really any reason for building this into the OS, it’s not particularly interesting or powerful and it really does seem just like nicking someone else’s ideas and lobbing them into your Operating System because you can. Harsh, Steve. Harsh…
  • Automator – as far as I can tell this is a GUI for AppleScript, allowing people like me who get scared by even the simplest of scripting languages to automate tasks quickly and easily without ending up dribbling into a cup. My personal jury is out on how useful it’ll actually be on a day-to-day basis but that doesn’t mean that I’m not impressed. AppleScript for the rest of us?
  • VoiceOver – a spoken interface for the Mac. I’m not really qualified to comment on the technology, but certainly the aspiration is good and important and I don’t doubt it’ll serve Apple well in cracking governmental markets. My only quibble – I’m not so impressed by the little white man they made in Illustrator to put in the logo. He looks a bit lame and … bendy …
  • .Mac sync – a revised version of iSync with a simpler UI and apparently some developer hooks. It still pisses me off that you have to have a .Mac account to do any syncing across the internet. I can’t quite believe that function is worth the $60 a year, nor do I think it’s even vaguely conceivable that you couldn’t explain to someone how to set up a server to handle that stuff themselves… Seems like a slightly shitty attempt to drill money out of you in an entirely random way…
  • Better Unix – neat! I think!
  • XCode 2.0 – wish I understood it!
  • System Wide Metadata Indexing – this is seriously cool and API’d up the wazoo so hopefully it will start to lay the foundations of the Finder technologies of the future…

All in all the big news for the operating system is the integration of search and metadata technologies into the heart of the Operating System. The Safari RSS and iChat AV stuff is pretty cool too and everything else looks like tweaks, gimmicks or outright rip-offs. I wonder when it’s out?

Categories
Technology

Notes from NotCon: Hardware

Well, anyway, since I’m up I may as well finish off my coverage of Sunday’s NotCon. After the Geolocation panel (my notes), I joined the Hardware panel. Over the entire day I self-consciously avoided all the political panels because they just looked like they’d be incredibly frustrating and confrontational. Ironically I decided that I wouldn’t find the Blogging panel quite as annoying, but more on that later in the day… The Hardware panel comprised talks by James Larrson, Steven Goodwin, Matt Westcott, George Wright and Anil Madhavapeddy and was a really mixed bag of the sublime and the ridiculous.

There’s something uniquely nostalgic about British geeks – their fetishes for the computers of their youth (the BBC Micro and the Sinclair Spectrum in particular) seem to overwhelm their future-thinking impulses time and time again. I can’t say that I’m convinced that this is a good thing – it makes me wonder about how British geekhood views its own chances of creating new technologies that actually can push things forward. Maybe they feel it’s just not possible any more? Maybe they think no one will take them seriously…?

That’s not to say that Matt Westcott’s illustration of new hardware and software trends on the Spectrum isn’t impressive or entertaining. He illustrates connecting the tiny computer to hard disks and compact flash, talks about the demo scene and the “only project on sourceforge for the Sinclair Spectrum”. He ends up with a streaming video version of the Chemical Brothers’ Let Forever Be video (directed by Michel Gondry). All good fun – I just can’t help but feel that it’s a little bit of a waste of a talented man’s time.

James Larrson’s piece was similarly random – but here at least the whole thing was clearly a bit tongue in cheek, and his presentational skills were so good that someone should really give him a TV-series of short introductions to crackpot inventors. He’d be awesome. The project he was talking about was based around using a BBC Model B from 1982 to measure the changes in state of the mayonnaise, bread and prawn components of a Marks and Spencer prawn sandwich – and using that to tell the time. I’m not going to go into too much detail except to say that he’s managed to get the accuracy so good that now the clock only loses/gains up to four hours in any given day.

I didn’t get the name of the next guy – I think it was the Reverend Rat – but he was showing how you could radically extend the range of Bluetooth devices. Apparently by soldering it together with an antenna he’s extended the range from ten metres to the rather more satisfyingly non-personal 35 miles (and more). His main planned use for this particular piece of tech seemed to be to stand on top of Centrepoint jacking into passers-by’s phones. Or that could have been a joke. Funny chap. Cool though…

Then we got to the three talks that were actually about the way technology might evolve: Steven Goodwin’s piece was on hacking around with your house and TV to allow you to control things long-distance (including recording TV on demand and streaming it back to your computer via – I think – e-mail), which wasn’t really particularly new in principle but was nice to actually hear about from someone who’s doing it throughout their home. [If you’re interested in this stuff, then the O’Reilly book Home Hacking Projects for Geeks could be a good read.]

Then George Wright talked about Interactive TV, why it wasn’t the web and why that’s a good thing (in his words). The language he used about the platform’s restrictions (no return path in many cases, exhaustive centralised testing on the platform required before any product can be rolled out, no literature to support development, completely limited to broadcast companies etc) doesn’t fill me with hope for the future of iTV – particularly when compared to the possibilities of the future ever-present, fat-piped, non-broadcast-limited, massively flexible and responsive web – but he did make a good case for convergence not being the point. We’re still talking around this stuff behind the scenes and I’ll let you know if we come up with anything interesting.

And finally – and my particular favourite of the session – Anil Madhavapeddy talked about using camera phones as ubiquitous remote controls / mice. There were some lovely aspects to this – the ‘ooh / aah’ bit coming when he demoed applications with ‘robust visual tags’ that look a bit like 2d bar codes, which the camera phone could recognise and manipulate. So you’d come up to some kind of public terminal, turn on the camera phone, arrange it so that you could see the control you wished to manipulate on the phone’s screen, and then press the equivalent of a mouse button – at which point the control on screen could be moved around just as if your camera phone was a mouse (via Bluetooth or Wifi, I assume). It sounds over-complex from this introduction, but some of the immediate benefits were clear – the same tags could be used as static encoders of commands in paper interfaces that you just printed out, there’s a built-in mechanism for manipulating money via a mobile phone that opens up lots of possibilities for exchanging or buying things, etc. etc. I’m going to be keeping an eye on this stuff, it was fascinating…
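
For what it’s worth, the cursor half of that interaction can be sketched very simply: track where the tag sits in successive camera frames and turn its apparent movement into mouse-style deltas. The sketch below assumes some tag-detection step has already given you the tag’s position in each frame (the detection itself is the clever part, and not shown here):

    def tag_to_cursor_delta(prev_center, curr_center, gain=2.0):
        """Turn the visual tag's apparent movement between two camera frames
        into a relative cursor movement. Positions are (x, y) pixel coordinates
        of the detected tag within the camera image; 'gain' scales camera pixels
        to screen pixels. Sign conventions depend on how the phone is held."""
        dx = (curr_center[0] - prev_center[0]) * gain
        dy = (curr_center[1] - prev_center[1]) * gain
        return dx, dy

    # e.g. the tag drifted 5px right and 3px down between frames:
    print(tag_to_cursor_delta((100, 80), (105, 83)))   # -> (10.0, 6.0)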

And that’s pretty much all I have to say about the Hardware panel at the moment. I have to head off to a thing at the RAB on the “21st Century Radio Listener” for work. I’ll talk about the next session on MP3s and Mash-Ups later in the day…

Categories
Technology

Notes from NotCon: Geocoding

So the first panel of the day is over and we’re now waiting in the over-crowded downstairs for the Matt Jones-hosted “Hardware” panel to get going.

My initial reactions to the Geocoding panel were extraordinarily positive – the first project that people talked about was called Biomapping and it was a fascinating concept. Basically the guy talked about using a galvanic skin response detector attached to a GPS device to start plotting individual reactions to the environment around them. Totally fascinating. Then followed Nick West from Urban Tapestries (note-based geo-annotation on mobile phones), Earle Martin from Open Guides (wiki-based open city guides) and a clump of people from Project Z, who seem to spend their lives creeping around in places where they shouldn’t, taking photographs and leaving Indymedia logos. Pretty cool stuff.

I tried to ask a question during the event, but unfortunately was shut down by Chris Lightfoot. For those who are interested, I wanted to know whether or not any of the geo-annotation systems (including but not limited to the Open Guide wikis) were building in any protection against spamming at the architectural level. So many useful and potentially valuable projects in the past have ended up with fundamental problems with spamming (including e-mail and weblogs – and now wikis); the last thing we need is a standard for annotating the earth and all the things around us that is going to be overwhelmed with adverts for prostitutes, scams, drugs and vouchers for Starbucks and McDonalds.
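
By ‘protection at the architectural level’ I mean something built into the write path itself rather than bolted on afterwards. Purely as an illustration (not something any of these projects has said they actually do), even a crude per-contributor, per-area rate limit would raise the cost of carpet-bombing the map with adverts:

    import time
    from collections import defaultdict

    class AnnotationGate:
        """Toy write-path spam control for geo-annotation: each contributor
        may only add a handful of notes per map cell per day."""

        def __init__(self, max_per_cell_per_day=3, cell_size_deg=0.01):
            self.max_per_day = max_per_cell_per_day
            self.cell_size = cell_size_deg
            self.log = defaultdict(list)   # (contributor, cell) -> list of timestamps

        def allow(self, contributor, lat, lon, now=None):
            now = now if now is not None else time.time()
            cell = (int(lat / self.cell_size), int(lon / self.cell_size))
            key = (contributor, cell)
            # Keep only the last 24 hours of this contributor's notes in this cell.
            self.log[key] = [t for t in self.log[key] if now - t < 86_400]
            if len(self.log[key]) >= self.max_per_day:
                return False
            self.log[key].append(now)
            return True

    gate = AnnotationGate()
    print(gate.allow("spammer", 51.5, -0.12))   # True for the first few notes, then False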

I’ll try and post up the full SubEthaEdit notes from the first session later in the day. No promises though…

Categories
Technology

Notes from NotCon: Introduction

So it’s Sunday morning at 11.30am and I’m going to start taking notes of the goings-on at NotCon. Probably the best way to describe the feel of the place so far is that it’s somewhere between the sensation of being at ETCon and how I felt when I went to a Doctor Who convention in Norwich aged around fourteen. The organisation is refreshingly haphazard (the Geolocation panel is supposed to start in about four minutes time and they’re still assembling the stage and trying to work out how to project computer screens onto a paint encrusted theatrical backdrop), and the cost of entry (at around £4) is around a five hundredth of what it costs for an Englishman to get to its (perhaps slightly more distinguished) American analogues.

NotCon has two concurrent streams across two rooms – one is on the ground floor and is completely full as of this moment. The other is on the second floor, marked by a succession of hand-scrawled notes and is almost completely empty.

My current plans for the day (should anyone care) are:

  • 11.40 – 12.30pm Geolocation (2nd Floor)
  • 12.30 – 1.30 Hardware (Ground Floor)
  • 2.00 – 2.50 Brewster Kahle (GF) or Mashups and MP3s (2nd)
  • 3.00 – 4.00 Social Software (2nd)
  • 4.30 – 5.20 Blogging with a point (GF)
  • 5.30 – 6.20 Peer-to-peer review (2nd)

I’m going to try and keep some rough notes of what’s going on over the day as and when I can, but no promises (it looks like power sockets are going to be hard to come by today and my Powerbook is a hungry beast).

Categories
Technology

What is the future of typing in public?

ETCon is a conference like no other. This is not because of the quality of the speakers but because of the type of audience it gets and the culture that has self-generated around it. One of the most notable features of the ETCon culture is the near-permanent and overt use of the laptop during sessions. It is not an exaggeration to say that half the people in the auditoria will have a computer open during a keynote. It’s not an exaggeration to say that a significant proportion of those people will be multi-tasking enormously – finding a massive variety of ways of interacting with each other around the main topic of discussion.

There will be an IRC channel – co-occupied by (1) the kind of attendees who can’t work at home without having fifty windows open on their computer, the TV on with the sound off and loud trance music pounding into their frontal lobes and (2) those poor unfortunate long-distance virtual hecklers who couldn’t get out of work or couldn’t afford to participate in person who spend half their time trying to work out what’s going on and the other half of their time trying to get someone to ask questions on their behalf.

There will be the SubEthaEdit gang (a group I fear I belong to), whose mission will be to attempt to get the clearest transcription of the event in question and who may or may not require the discipline of writing to help them keep everything in their heads. There are a variety of sub-types of SubEthaEditors, including the blind transcribers, the commenters and the newbies. This year I fell into the role of blind transcriber, by dint of being able to type faster than most people. I hoped that other people would amend the notes around the place, and fix any errors I created, but – on the whole – SubEthaEdit this year for me became more of a broadcast experience.

Then there are the people who are surfing the net, or posting direct to their weblogs, or throwing files between each other over iChat or AIM, or who are playing with the subject of the talk in question (cf. Ludicorp’s piece on Flickr), or who are actually trying to finish off their own papers, or (as I often think might be the case with Cory Doctorow) paying their bills, organising their next speaking gig and knocking out a draft of their latest novel.

All in all then, the experience of ETCon is of a place in which a hell of a lot of people do a hell of a lot of typing.

At ETCon this year, Cory Doctorow did a piece on e-books that I’ve talked about before. His argument is that e-books can’t compete with paper at what paper does best. The DRM’d versions of novels that only allow you to read in a linear fashion – well these aspire to be ‘proper’ books, but they can’t hope to reach that level because of the absence of viscera. E-books simply aren’t attractive, engaging, smelly, textural or beautiful objects. This kind of e-book may be portable, but you still can’t take it into the bath with you.

But why should e-books be operating only at the level of what paper does best? Why shouldn’t they concentrate more on what they can add to the experience? If you give out a plain text version of your novel, then so much more becomes possible that wasn’t before – grepping / cutting / selective printing / copy & pasting / running simple scripts against it / reading on any platform in any place and at any time / distributing and redistributing. If viewed from this perspective, then the gestalt of the paper book and the e-book is enormously potent. And if you take away the e-book, then the paper book might seem – well, broken.

At ETCon, that’s how those of us who are continually backchannelling think the experience of the conference for those without backchannel wifi-enabled social access to the concurrently written-into-existence e-conference must be. Those people who don’t engage in the larger conference are having a truncated experience of the event. It’s as if they’d decided to walk into a paper with a blindfold on.

I say all of this because I’m aware how odd it can sound. Since my return to the UK I’ve been to two events – one was ConCon, and there simply weren’t enough power-points to allow people to be engaged in any significant degree of back-channelling. But then the papers were summaries, they were truncations, densely-packed contextualisers that served little purpose other than to inspire questions. ConCon was of a scale where the size and social dynamics of the group meant that back-channelling was simply less necessary. And even there, typing went on here and there, unremarked upon, normal.

The other event I’ve attended was the AIGA UK event at the Design Council where representatives of the BBC spoke. And there a very different dynamic was in place. I was pretty much the only person in the room with an open laptop – trying to take very sparse and occasional notes (given the paucity of power-supplies) – and it became very clear to me very quickly that in a room of roughly 100/150 people, the muffled noise of my very occasional typing was considered to be rude and intrusive. The assumption was that I was doing stuff that was not related to the event concerned, that I was demonstrably not engaging with what was going on and that the open laptop was a direct affront to the spirit of the event. And in the meantime, I wanted to follow up some of the points online, I wanted to explore the issues more fully, I found myself passing my laptop to a neighbour so that he could see what I was thinking about. Much like a book without an e-book, the event seemed a little broken without a backchannel, without wifi. And I seemed to be the only one who noticed.

A couple of years ago I wouldn’t have been surprised by this attitude, but after two ETCons it seems vaguely archaic – particularly when surrounded by an apparent fraternity of highly web-literate Londoners. But it’s not limited to London – Stewart reports going to Infest in Vancouver and discovering an environment in which large numbers of geeks go to a conference and feel absolutely no need to backchannel, no need to have their laptops open, no need to note-take or collaborate or discuss in parallel.

So I wonder to myself which way we’re moving. Are we moving more towards a ubiquitous computing presence where laptop note-taking at events and back-channelling are more common than now, where it breaks out of the individual contexts of ETCon and spreads more widely into other geek conferences, discussion-based events or even into work or conversational meetings? Or is this kind of overt back-channelling going to remain the province of a very particular clump of conference cultures – perhaps only percolating elsewhere in a more backgrounded, perpetual but less overtly lean-forward kind of way?

In essence what I’m asking is: What is the future of typing in public?