[Article] China Still Blocks Google.cn

    (Google's China portal inaccessible in Shanghai, Beijing)

Not enough done yet, I guess. Toby Inkster posted this in alt.www.webmaster:

__/ [Toby Inkster] on Friday 03 February 2006 19:24 \__

Re: [Article] China Still Blocks Google.cn

   [Albutt  Bore  claims  to  have  invented the  bimbo  nation  super
bottleneck.  That's  the problem.  It  is based on  a Star-Trek-Commie
anticommercial view of the world.  It is time to charge users by their
usage,  at   each  gateway.   The   free-love,  free-money,  free-AIDS
mentality  of the  canker  sore sandal  wearers  of the  1960s is  the
problem! When the  Red Chinese downed a USA plane  near Taiwan in 2001
they  retaliated in  part by  publishing  computer virus  kits on  the
web. However, the Chinese government strictly censors and controls the
internet going  into China. Why do  we not censor  the internet coming
OUT  of China?  It didn't  seem to  bother anyone  when  the slightest
information from  the Serbian side was  censored in the  1990s. In the
summer of 2003  a wave of spam paralysed the internet  and we have not
been able to use the internet in a normal way since then.]

   "The Internet Is Broken", by David Talbot
   [cover story in MIT Technology Review, December 2005]
   In his office  within the gleaming-stainless-steel and orange-brick
jumble  of MIT's Stata  Center, Internet  elder statesman  and onetime
chief protocol architect  David D. Clark prints out  an old PowerPoint
talk. Dated  July 1992,  it ranges over  technical issues  like domain
naming  and  scalability.  But  in  one slide,  Clark  points  to  the
Internet's dark side: its lack of built-in security.
   In  others, he  observes  that sometimes  the  worst disasters  are
caused not by sudden events  but by slow, incremental processes -- and
that  humans  are  good   at  ignoring  problems.  "Things  get  worse
slowly. People adjust," Clark  noted in his presentation. "The problem
is assigning the correct degree of fear to distant elephants."

   Today, Clark believes the elephants  are upon us. Yes, the Internet
has wrought wonders: e-commerce  has flourished, and e-mail has become
a ubiquitous means of communication. Almost one billion people now use
the Internet,  and critical industries like  banking increasingly rely
on it.

   At  the same  time, the  Internet's shortcomings  have  resulted in
plunging  security   and  a  decreased  ability   to  accommodate  new
technologies.  "We are at  an inflection  point, a  revolution point,"
Clark now argues. And  he delivers a strikingly pessimistic assessment
of where the  Internet will end up without  dramatic intervention. "We
might just be at the point where the utility of the Internet stalls --
and perhaps turns downward."

   Indeed, for the average user, the Internet these days all too often
resembles New  York's Times Square in  the 1980s. It  was exciting and
vibrant, but you made sure to keep your head down, lest you be offered
drugs,  robbed, or  harangued by  the  insane. Times  Square has  been
cleaned up, but  the Internet keeps getting worse,  both at the user's
level, and -- in the view of Clark and others -- deep within its architecture.

   Over the  years, as Internet applications  proliferated -- wireless
devices, peer-to-peer file-sharing, telephony -- companies and network
engineers  came up with  ingenious and  expedient patches,  plugs, and
workarounds. The  result is that the  originally simple communications
technology has become a complex  and convoluted affair. For all of the
Internet's wonders,  it is also  difficult to manage and  more fragile
with each passing day.

   That's why  Clark argues that  it's time to rethink  the Internet's
basic architecture, to  potentially start over with a  fresh design --
and  equally important,  with  a plausible  strategy  for proving  the
design's    viability,   so    that    it   stands    a   chance    of
implementation. "It's not as if there is some killer technology at the
protocol or  network level  that we somehow  failed to  include," says
Clark. "We need  to take all the technologies we  already know and fit
them together so  that we get a different overall  system. This is not
about  building a  technology innovation  that changes  the  world but
about architecture --  pulling the pieces together in  a different way
to achieve high-level objectives."

   Just such  an approach is now  gaining momentum, spurred  on by the
National  Science Foundation.  NSF  managers are  working  to forge  a
five-to-seven-year plan estimated to cost $200 million to $300 million
in research funding to  develop clean-slate architectures that provide
security, accommodate new technologies, and are easier to manage.

   They also  hope to  develop an infrastructure  that can be  used to
prove that the  new system is really better than  the current one. "If
we succeed in  what we are trying to do, this  is bigger than anything
we, as  a research community, have  done in computer  science so far,"
says  Guru  Parulkar,  an   NSF  program  manager  involved  with  the
effort.  "In  terms of  its  mission  and vision,  it  is  a very  big
deal. But  now we are just at  the beginning. It has  the potential to
change the game. It could take  it to the next level in realizing what
the  Internet could  be  that has  not  been possible  because of  the
challenges and problems."

   The Internet's  original protocols, forged in the  late 1960s, were
designed to do one thing very well: facilitate communication between a
few hundred  academic and government users.  The protocols efficiently
break  digital data  into simple  units  called packets  and send  the
packets  to their destinations  through a  series of  network routers.
Both  the routers  and PCs,  also  called nodes,  have unique  digital
addresses known as Internet Protocol or IP addresses. That's basically
it. The system assumed that all  users on the network could be trusted
and that the computers linked by the Internet were mostly fixed in place.

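   To make the packet-and-router model above concrete, here is a minimal
Python sketch (every name and address in it is invented for illustration;
it models no real stack): a message is chopped into packets, each packet
carries only source and destination addresses plus a sequence number, and
routers forward on the destination address without ever looking inside.

    # Toy sketch of the original design: split a message into packets,
    # stamp each with source and destination addresses, and forward them
    # hop by hop using static per-router forwarding tables.
    from dataclasses import dataclass

    @dataclass
    class Packet:
        src: str        # source address (toy string form)
        dst: str        # destination address
        seq: int        # byte offset, so the receiver can reassemble
        payload: bytes  # the routers never look inside this

    # Each router maps a destination address to the next hop.
    FORWARDING = {
        "routerA": {"10.0.0.2": "routerB"},
        "routerB": {"10.0.0.2": "host-10.0.0.2"},
    }

    def send(message: bytes, src: str, dst: str, first_hop: str) -> list[Packet]:
        packets = [Packet(src, dst, i, message[i:i + 8])
                   for i in range(0, len(message), 8)]
        delivered = []
        for pkt in packets:
            hop = first_hop
            while not hop.startswith("host-"):   # forward until a host is reached
                hop = FORWARDING[hop][pkt.dst]
            delivered.append(pkt)                # destination reassembles by seq
        return delivered

    if __name__ == "__main__":
        msg = b"a love letter or a virus - the network cannot tell"
        out = send(msg, "10.0.0.1", "10.0.0.2", "routerA")
        print(b"".join(p.payload for p in sorted(out, key=lambda p: p.seq)).decode())
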
   The Internet's  design was  indifferent to whether  the information
packets added  up to  a malicious virus  or a  love letter; it  had no
provisions for doing much besides getting the data to its destination.
Nor did  it accommodate nodes  that moved --  such as PDAs  that could
connect to the Internet at any  of myriad locations. Over the years, a
slew of  patches arose:  firewalls, antivirus software,  spam filters,
and the  like. One  patch assigns  each mobile node  a new  IP address
every time it moves to a new point in the network.

   Clearly,  security  patches  aren't  keeping  pace.  That's  partly
because  different  people  use  different patches  and  not  everyone
updates them  religiously; some people  don't have any  installed. And
the most  common mobility  patch -- the  IP addresses  that constantly
change as you move around  -- has downsides. When your mobile computer
has  a  new identity  every  time it  connects  to  the Internet,  the
websites you deal with regularly  won't know it's you. This means, for
example, that  your favorite airline's Web  page might not  cough up a
reservation  form with  your  name and  frequent-flyer number  already
filled out. The constantly changing  address also means you can expect
breaks in service  if you are using the Internet to,  say, listen to a
streaming radio broadcast on your  PDA. It also means that someone who
commits a crime online using a mobile device will be harder to track down.

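   The downside of the ever-changing address is easy to sketch. Assuming,
purely for illustration, a site that keys its sessions either on the
client's IP address or on a token the client presents, a toy Python example
looks like this:

    # Toy sketch of why ever-changing addresses hurt: a server that keys
    # "who is this?" on the client's IP address forgets the client as soon
    # as the mobility patch hands the client a new address.  (All names and
    # addresses are hypothetical; this is not any real site's session code.)
    sessions_by_ip = {}    # fragile: identity tied to the network address
    sessions_by_token = {} # robust: identity tied to a token the client keeps

    def visit(ip: str, token: str) -> None:
        by_ip = sessions_by_ip.get(ip, "stranger")
        by_token = sessions_by_token.get(token, "stranger")
        print(f"from {ip}: IP lookup says {by_ip!r}, token lookup says {by_token!r}")
        sessions_by_ip[ip] = "frequent flyer #12345"
        sessions_by_token[token] = "frequent flyer #12345"

    visit("203.0.113.7", "cookie-abc")   # first visit: stranger either way
    visit("203.0.113.7", "cookie-abc")   # same address: both lookups succeed
    visit("198.51.100.9", "cookie-abc")  # PDA moved, new IP: only the token still works
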
   In  the view  of many  experts in  the field,  there are  even more
fundamental  reasons to  be  concerned. Patches  create  an ever  more
complicated system, one that becomes harder to manage, understand, and
improve upon.   "We've been on a  track for 30  years of incrementally
making improvements to the Internet  and fixing problems that we see,"
says Larry Peterson, a computer scientist at Princeton University. "We
see vulnerability, we  try to patch it. That approach  is one that has
worked for  30 years. But there  is reason to be  concerned. Without a
long-term plan, if you are just patching the next problem you see, you
end up with  an increasingly complex and brittle  system. It makes new
services  difficult to  employ.  It  makes  it much  harder to  manage
because of the added complexity of all these point solutions that have
been added. At the same time, there is concern that we will hit a dead
end at some point. There will be problems we can't sufficiently patch."

   It's worth  remembering that despite all  of its flaws,  all of its
architectural kluginess  and insecurity and the  costs associated with
patching  it, the  Internet still  gets the  job done.  Any  effort to
implement  a better  version  faces enormous  practical problems:  all
Internet service  providers would  have to agree  to change  all their
routers and software,  and someone would have to  foot the bill, which
will likely come to many  billions of dollars. But NSF isn't proposing
to abandon the old network or  to forcibly impose something new on the
world. Rather, it essentially wants  to build a better mousetrap, show
that it's better, and allow a  changeover to take place in response to
user demand.

   To  that  end, the  NSF  effort  envisions  the construction  of  a
sprawling   infrastructure   that   could  cost   approximately   $300
million. It would  include research labs across the  United States and
perhaps link with research efforts abroad, where new architectures can
be given a full workout.  With a high-speed optical backbone and smart
routers, this test bed would  be far more elaborate and representative
than the  smaller, more limited  test beds in  use today. The  idea is
that new architectures would be battle tested with real-world Internet
traffic.  "You hope  that provides enough value added  that people are
slowly and  selectively willing  to switch, and  maybe it  gets enough
traction  that  people  will  switch  over,"  Parulkar  says.  But  he
acknowledges, "Ten  years from  now, how things  play out  is anyone's
guess. It could be a parallel infrastructure that people could use for
selective applications."

   Still, skeptics  claim that  a smarter network  could be  even more
complicated  and  thus  failure-prone  than  the  original  bare-bones
Internet.  Conventional wisdom  holds that  the network  should remain
dumb, but  that the smart devices  at its ends  should become smarter.
"I'm not happy  with the current state of affairs.  I'm not happy with
spam; I'm not happy with  the amount of vulnerability to various forms
of attack," says  Vinton Cerf, one of the  inventors of the Internet's
basic protocols, who  recently joined Google with a  job title created
just for  him: chief  Internet evangelist. "I  do want  to distinguish
that  the primary  vectors causing  a lot  of trouble  are penetrating
holes in operating systems. It's more like the operating systems don't
protect themselves very well. An argument could be made, 'Why does the
network have to do that?'"

   According to Cerf, the more you  ask the network to examine data --
to authenticate a person's identity, say, or search for viruses -- the
less efficiently  it will move the  data around. "It's  really hard to
have  a network-level thing  do this  stuff, which  means you  have to
assemble the  packets into something  bigger and thus violate  all the
protocols,"  Cerf says. "That  takes a  heck of  a lot  of resources."
Still,  Cerf  sees   value  in  the  new  NSF   initiative.  "If  Dave
Clark...sees some notions and  ideas that would be dramatically better
than what we  have, I think that's important  and healthy," Cerf says.
"I sort of wonder about something, though. The collapse of the Net, or
a major security  disaster, has been predicted for  a decade now." And
of course  no such disaster has occurred  -- at least not  by the time
this issue of Technology Review went to press.
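
   Cerf's efficiency point can be sketched in a few lines of Python
(hypothetical toy code, not a model of any real router or scanner): plain
forwarding is stateless and touches each packet once, while scanning for
content forces the middle of the network to keep per-flow buffers and
reassemble payloads before it can decide anything.

    # Rough sketch of the trade-off: a dumb router touches each packet once,
    # while in-network scanning has to buffer and reassemble a whole flow
    # before it can even look for a signature.  (Toy code for illustration.)
    def forward(packet: dict) -> str:
        # Stateless: read the destination field and pass the packet on.
        return f"next hop for {packet['dst']}"

    class FlowScanner:
        """Stateful: keeps per-flow buffers until the payload can be scanned."""
        def __init__(self, signature: bytes):
            self.signature = signature
            self.buffers: dict[tuple, bytes] = {}

        def inspect(self, packet: dict) -> bool:
            flow = (packet["src"], packet["dst"])
            self.buffers[flow] = self.buffers.get(flow, b"") + packet["payload"]
            return self.signature in self.buffers[flow]  # memory grows with traffic

    scanner = FlowScanner(b"EVIL")
    pkts = [{"src": "a", "dst": "b", "payload": b"EV"},
            {"src": "a", "dst": "b", "payload": b"IL payload"}]
    for p in pkts:
        print(forward(p), "| flagged:", scanner.inspect(p))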

                - = -
    Vasos-Peter John Panagiotopoulos II, Columbia'81+, Bio$trategist
          BachMozart ReaganQuayle EvrytanoKastorian
  ---{Nothing herein constitutes advice.  Everything fully disclaimed.}---
[Urb sprawl confounds terror] [Remorse begets zeal] [Windows is for Bimbos]
   [Homeland Security means private firearms not lazy obstructive guards]

Re: [Article] China Still Blocks Google.cn

__/ [Vasos-Peter] on Monday 06 February 2006 21:08 \__

Use a filter.

It already has, though matters seem to be improving. There is still platform
and browser discrimination, and there are cases where usability is forsaken
in favour of flash (or Flash).

With improvement often comes complexity. The least one can do is embrace
standards, not break them or independently 'extend' them.

The knowledge is already there and proposals for better hypermedia systems
exist too. There are also several prototypes, but bringing them into broad
use is the real challenge. For now, we live with the flawed protocols and
continue patching them where possible; CSS, microformats, and XML are
examples of such 'fixes' or enhancements to the Web.

I fail to see how this addresses the issue of censorship in China. I'm
beginning to suspect it's a hit-and-run post, unless you can prove me wrong.

Anti-virus issues are attributable to bad O/S and software design. Firewalls
and spam filters are intended to block 'junk' traffic, which you can never
truly avoid altogether, though you can hinder it.

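To illustrate 'hinder, not avoid altogether': a deliberately naive keyword
filter in Python (the markers and threshold below are made up) stops the
obvious junk while anything mildly disguised slips through.

    # Minimal sketch of the 'hinder, not eliminate' point: a naive keyword
    # filter blocks some junk and inevitably lets other junk through.
    # (Hypothetical scoring rules, for illustration only.)
    JUNK_MARKERS = ("viagra", "free money", "click here", "act now")

    def looks_like_junk(message: str, threshold: int = 2) -> bool:
        text = message.lower()
        score = sum(marker in text for marker in JUNK_MARKERS)
        return score >= threshold

    print(looks_like_junk("FREE MONEY!!! Click here and act now"))            # True - blocked
    print(looks_like_junk("Meeting moved to 3pm, click here for the agenda")) # False - slips past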

And who exactly will be the Big Daddy with the opportunity to change the
world? Will it be Google and their rumoured private network? Whatever is
proposed, people will turn their backs on it, which leads to fragmentation
of content. That is the last thing the world needs.

The Net has not collapsed thanks to all that 'glue' people have churned out,
whether it's the challenge/response filters one employs or the many firewalls
intended to prevent DDoS attacks. Referrer spam, copyright infringement,
content denial, censorship, and mirroring are further issues, among many.

Just a few comments: while the above is an interesting read, the page says
"(Uses any browser - avoid stupid incompatibilities.)" and also "(Problems?
Increase font size and number of colors)". Even with fonts resized, the text
remains illegible.


Roy S. Schestowitz      |    Have you hugged your penguin today?
http://Schestowitz.com  |    SuSE Linux     |     PGP-Key: 0x74572E8E
  4:20am  up 20 days 23:36,  11 users,  load average: 0.46, 0.49, 0.55
      http://iuron.com - Open Source knowledge engine project
