

The Net At Breaking Point? Pt 2

Ashley Highfield | 14:45 UK time, Wednesday, 28 November 2007

I was at a meeting yesterday chaired by the Minister for Competitiveness and attended by the Secretary of State for Business, Enterprise and Regulatory Reform.

All the players were there, including the CEO of BT, Ben Verwaayen, and his sidekick Ian Livingstone, CEO of BT Retail, board members from every major ISP, the heads of the regional development agencies, and so on.

The main thrust of the debate was:

Firstly: "Is the internet currently at creaking point?"

The view from most players: "no".

Secondly: "Will it get to creaking point in the short term?"

The consensus: probably not. There is still much that can be done to the main backbone infrastructure to improve performance before the massive step change of fibre to the home is needed, a conclusion similar to that of the US study I mentioned in my previous post.

Thirdly: "Does the business model exist to invest the billions to upgrade the 'access infrastructure' (the last bit to the home) to fibre?"

Now, according to BT, almost certainly "no".

This is a very different view to that of the US report I mentioned in my previous post, which claims there are very strong drivers and business models to upgrade.

One driver is that the current infrastructure will become increasingly unusable and people will want to upgrade: "Overall, transmitting over a saturated broadband link will feel a lot like the bad old days of dial-up. Long pauses between request and response, with some applications just too painful to bother with."

The second driver is all the new services that ultra high-speed broadband will enable.

This kind of thinking from BT suggests there is no pressing need to encourage the build-out of the next generation access ("NGA") broadband network. But that approach could have a profound impact on the BBC, on the growth of really high-speed broadband in the UK, and on its universal availability.

There is a good reason that ISPs would like to shape traffic: they can. The technology is now there at the ISPs' exchanges (the IP-DSLAMs) to charge extra for supplying a particular user with a particular file (e.g. an episode of EastEnders) from a particular source (e.g. BBC iPlayer).
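
To make that concrete, here is a minimal sketch in Python of the kind of per-source accounting being described. It illustrates the idea only, not any ISP's actual system: the address range and the per-gigabyte surcharge are invented for the example.

    # Illustrative only: a toy per-source accounting pass. The CDN address
    # range and the tariff below are made up; no real ISP system is implied.
    from collections import defaultdict
    from dataclasses import dataclass
    from ipaddress import ip_address, ip_network

    IPLAYER_CDN = ip_network("203.0.113.0/24")   # placeholder range, not a real CDN
    SURCHARGE_PER_GB = 0.05                      # hypothetical tariff, pounds per gigabyte

    @dataclass
    class Flow:
        subscriber: str         # the ISP's customer
        source_ip: str          # where the bytes came from
        bytes_transferred: int  # volume of the download

    def bill_surcharges(flows):
        """Tally the extra charge per subscriber for traffic from the matched source."""
        charges = defaultdict(float)
        for flow in flows:
            if ip_address(flow.source_ip) in IPLAYER_CDN:
                charges[flow.subscriber] += (flow.bytes_transferred / 1e9) * SURCHARGE_PER_GB
        return dict(charges)

    # One subscriber streams a 600 MB programme from the matched range:
    print(bill_surcharges([Flow("subscriber-42", "203.0.113.10", 600_000_000)]))

The mechanics matter less than the principle: once flows can be classified by who is downloading what from where, differential charging follows naturally.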

But to me it seems like the flames over BBC iPlayer's impact on the current internet infrastructure are dying down - much as journalists would like to fan them. For example, in an article in the Observer, it was reported that "[t]he BBC's iPlayer could be hit by an industry-wide move to charge companies and consumers according to the size of files downloaded online", while a Mail on Sunday story last month accused the BBC of "hitching a free ride".

But in response to the Observer story, BT and Virgin were quick to distance themselves, BT's chief press officer saying: "BT is not complaining about or discussing the implications of iPlayer with the BBC".

ISPs seem to want to downplay the whole "internet at breaking point; content owners must pay" story.

But I may have been wrong about the motivation for their recent change of heart. I thought that most ISPs were firmly set against charging content owners to stop their pipes filling up. They were using the convincing logic that their subscribers would feel that the ISP has already charged once for this content to be delivered via the monthly broadband subscription fee.

But after yesterday’s meeting, I am starting to wonder whether there was another motivation for claiming the current internet is doing just fine. Because if it is doing fine, then the government and Ofcom have less reason to intervene and regulate the build-out of the NGA (fibre to the home) network.

This would allow BT to build out its "21st Century network" at its own speed, where it wants, carrying whatever services it deems fit, to whomever it pleases.

I am not in favour of regulation where it is not needed. We do not need Ofcom to transplant TV regulation onto the internet (see Mark Thompson’s Liverpool speech about "TV without frontiers"). But I don’t want to see the broadband lead we achieved in the UK (after a very slow first few years ended by the eventual and decisive intervention against BT [Local Loop Unbundling]) be lost because we let the same situation of very conservative build-out arise all over again.

I believe the benefits of a truly high-speed broadband Britain will be enormous (and this means NGA speeds of 100Mb/s, not the 10-20Mb/s that the current infrastructure can support).
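
As a rough, back-of-the-envelope illustration of the difference those speeds make, assume an hour of HD video comes to about 3.5GB (an assumed figure, ignoring overheads and contention):

    # Rough illustration: transfer time for an assumed 3.5 GB hour of HD video
    # at current and NGA-class link speeds. Overheads and contention are ignored.
    FILE_GB = 3.5  # assumed size of one hour of HD video

    def download_minutes(file_gb: float, link_mbps: float) -> float:
        """Transfer time in minutes for file_gb gigabytes over a link_mbps megabit/s link."""
        megabits = file_gb * 8_000  # 1 GB = 8,000 megabits (decimal units)
        return megabits / link_mbps / 60

    for speed_mbps in (10, 20, 100):
        print(f"{speed_mbps:>3} Mb/s -> {download_minutes(FILE_GB, speed_mbps):5.1f} minutes")

On those assumptions the same programme drops from roughly three-quarters of an hour at 10Mb/s to under five minutes at 100Mb/s - the difference between a download you schedule and one you barely notice.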

We are a creative knowledge economy: the creativity will explode as the bandwidth increases. I know we at the BBC are offering services based on the available bandwidth. Open the floodgates, and the content and applications will appear, followed closely by consumer take-up.

The US report cites: "[a]mong the trends driving this, the authors refer to growing use of Web 2.0 applications, the replacement of physical travel by virtualisation of face-to-face contact within friends and families, users switching from broadcast to IP-accessed versions of radio and television (especially the latter), and video-sharing".

I’d add to this list of NGA drivers:

  • exploitation of video archives
  • production collaboration (from 2D TV footage to 3D architectural models)
  • health monitoring (and remote procedures)
  • security (including video child-care oversight)
  • simultaneous in-home access to multiplayer gaming
  • services to enable small-to-medium businesses to play with the big boys, including telepresence
  • reduction in carbon footprint and lower overheads from home working
  • video-rich social networks, and so on.

My own worries were stated simply by the Secretary of State: "the digital divide will be the new definition of the haves and have-nots".

Ofcom is currently consulting on its response to NGA. So, what should we be telling them?

Ashley Highfield is Director, BBC Future Media & Technology. Part 1 of this post is here.

Comments

  1. At 09:15 PM on 29 Nov 2007, J Davies wrote:

    Who pays? The end user, one way or another. I think ISPs need to get reality into their pricing structures; this may be down to BT's wholesale rates or possibly OFCOM having a bit of far-sightedness (I'm not too critical of OFCOM, but it could act more quickly).

    I simply want to rent a connection that gives me a reliable connection speed and an agreed amount of data. Is that too much to ask?

  2. At 02:15 PM on 30 Nov 2007, a commenter wrote:

    Ofcom should be getting involved - make the '21st Century network' rollout happen and make it happen without delay.

    If they let BT sit on its hands then the rest of the world will be on "Web 4.0" before the majority in the UK start to see any change.

    It wasn't that long ago that "www" was "world wide wait".

    We need to maintain investment and push for progress. Without it, just like the railways and roads of the past, the UK's "digital" highways will quickly become outdated and poorly equipped to keep up with demand.

  3. At 05:03 AM on 01 Dec 2007, Dave wrote:

    I think investment in NGA should be pushed forward as much as possible, but I don't think that should be at the expense of net-neutrality.

    There's little to be gained from over-regulation, but I'd hope sensible discussions could be had with Ofcom to make sure that NGA plans are on track to deliver a fully-wired, high-bandwidth Britain in a sensible timeframe.

    It's in the interests of content providers and content consumers to prevent traffic shaping. Net access should be treated largely like a utility - I don't get charged a different rate for my gas and electricity depending on whether it's used for my hob or central heating. The way we live today, net access is becoming a comparably essential utility.

    Hopefully OFCOM sees it as part of its remit to ensure ISPs do not let greed break some of the reasons the net has been a success - unregulated access from content providers to consumers, and the ability for anyone (within limits) to be a content provider.

  4. At 05:34 PM on 03 Dec 2007, Adrian Stannard wrote:

    I would like to know exactly what the need is for increasing ISP bandwidth - a mantra that is often bandied about the web, which however seems to ignore that in most cases the rate of information transfer is ultimately determined by server load, not network speed, and ultimately the bottleneck in servers comes down to that overlooked, comparatively slow electro-mechanical device - the hard drive. Even the fastest 15,000 rpm drives are lucky if they can transfer above 100MB/s of contiguous files - real web content is made of thousands of small files, which will reduce transfer to below 10MB/s (remember that the access time creeps in for each individual file that needs to be accessed, and the transfer speeds typically trumpeted in hard drive manufacturers' specifications refer to the transfer capabilities of the interface - e.g. 150MB/s SATA - which is almost completely irrelevant; it is not the real rate at which data comes off the drive).

    So what happens on a server cluster with about 100 users per server? Typically you get 10MB/s / 100 = 100KB/s data transfer. Of course, some servers cache recent files in RAM, but that depends on configuration and type of content, and some hosts have more servers per number of users - but equally some hosts squeeze as many sites as they can into a single server and don't give a damn about traffic. My guess matches the transfer rates I typically measure on a 4MB/s cable broadband connection (which does indeed transfer at around 4MB/s when I access a relatively idle server at a host I'm using). Clearly my ISP isn't at fault for slow content.

    And lo and behold if the page is script-based! How many SQL queries a site uses varies wildly, but many CMS-derived dynamic pages are the main resource hog of a server, some taking up to a few seconds to compile pages on even a "free" server (from personal experience in hosting my own script-based sites)... network speeds are almost irrelevant in comparison!

    Lastly, the BBC of all sites should not ignore the implications of server load - why? Because it is highly vulnerable to "Statistics Hacking", that undesirable activity where individuals with a certain agenda attempt to push stories to the top of the "most read" list by use of automated programs issuing multiple HTTP requests/JavaScript connections for "email to friend" - which contributes to a good proportion of the BBC's server load. I suspect it is very difficult to obtain reliable figures here on the extent of this activity, because counting depends on the existence of a security script being able to spot the trait and prevent it in the first place!

    So let's get all the facts about information transfer before pinning the blame on one thing (like ISPs).

  5. At 07:23 PM on 07 Dec 2007, J Davies wrote:

    Adrian Stannard,

    Your argument about server capacity and hard disk speed used to be true, but the introduction of things like BitTorrent, whose swarm structure serves files from many peers at once, means that network bandwidth becomes the limiting factor.

  6. At 04:04 AM on 09 Apr 2008, poke50uk wrote:

    I read about a similar news story on the main site just...
    It made me laugh...
    Either way, when I bought the largest broadband package available, 20Mb/s, I was actively encouraged by my ISP to download MP3s and watch programmes online... now they complain it's too much. Glad I'm with cable and not a phone line! I know the limits of a good old-fashioned copper wire compared to fibre-optic... especially in a built-up area.

    I've turned nocturnal, so normal TV between 1am and 7am gives me news, and that's about it - thus why BBC iPlayer is a godsend to me!

