In Tokyo, Seoul, and Hong Kong, residents get bidirectional, gigabit Internet for less than U.S. $40 a month. On the other side of the globe, Parisians have a similar deal, though their upload speed is only 200 megabits per second (and much of the rest of France isn’t so lucky).
Most of us in the U.S. would be happy with half that bandwidth — even as we accept paying twice as much as Internet subscribers in Asia and Europe. In Seattle, I pay Comcast nearly $67 per month for a 50 Mbps (6.25 megabytes per second — MBps) connection.
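To see just how lopsided that comparison is, it helps to put both plans in dollars per megabit. The sketch below uses only the figures cited above ($67 for 50 Mbps in Seattle versus roughly $40 for gigabit service in Tokyo); the helper function name is my own.

```python
# Compare cost per megabit for the two plans mentioned in the text.
def price_per_mbps(monthly_usd, mbps):
    """Dollars per Mbps of downstream bandwidth per month."""
    return monthly_usd / mbps

seattle = price_per_mbps(67, 50)    # Comcast: $67/month for 50 Mbps
tokyo = price_per_mbps(40, 1000)    # roughly $40/month for gigabit service

print(f"Seattle: ${seattle:.2f} per Mbps")  # Seattle: $1.34 per Mbps
print(f"Tokyo:   ${tokyo:.2f} per Mbps")    # Tokyo:   $0.04 per Mbps

# Megabits vs. megabytes: 50 Mbps / 8 bits per byte = 6.25 MBps.
print(50 / 8)  # 6.25
```

On a per-megabit basis, the Seattle plan costs more than 30 times as much as the Tokyo plan.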
So why is broadband such a bad deal in the U.S.?
The answer lies at the uneasy intersection of technology and politics, and the story begins in 1984, when Congress passed the Cable Communications Policy Act. At the time, of course, personal computers had only recently been introduced and the public Internet didn’t yet exist. (The precursor to the Internet — ARPANET — had been carrying messages between university and government researchers since 1969.)
In those days of dial-up connections, legislators probably had no inkling that most consumer broadband Internet service would eventually travel over cable-television lines. Their primary concern was bringing some order to the burgeoning cable-TV markets, ensuring both competitive pricing (via deregulation) and standards in programming. Government had some say in the matter because the private-sector companies’ cables had to traverse public property. But there was a long debate over which level of government should implement the act: federal, state, or local.
Congress cedes cable access to local control
The Cable Communications Policy Act of 1984 gave municipalities primary authority to grant and renew franchise licenses for local cable operations.
Generally, communities have given cable companies access to public property in exchange for agreements about such things as programming and access to residences and businesses in specified areas.