content from database


Hello All,
I have a very basic query. Please help me.
I am developing a website (for the first time) in which all the content is
in the database [SQL].
Now I just want to know: do search engines see my database while
crawling, or can they only read HTML files?
All links, drop-down list text, everything is in the SQL DB.
What do I do? Do I need to create static pages
so that search engines can recognize the site?
I am working in

What file types can search engines / the Google bot see?

Thanks for your answers. If my query has been asked/answered in the past or
somewhere else, then kindly give me the link.

Winners never quit and quitters never win.

Re: content from database


Try changing to a better platform such as Linux.


Search engines see only what your webserver serves to them.
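To make that concrete, here is a minimal sketch (the table, column names, and URL scheme are all made up for illustration): the bot only ever receives the HTML string your server emits, never the SQL or the database behind it.

```python
import sqlite3

# In-memory demo database standing in for the real one.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE items (id INTEGER, name TEXT)")
db.executemany("INSERT INTO items VALUES (?, ?)",
               [(1, "Widget"), (2, "Gadget")])

def render_page():
    """What the webserver sends back for GET / -- plain HTML."""
    rows = db.execute("SELECT id, name FROM items ORDER BY id").fetchall()
    links = "".join(f'<li><a href="/item/{i}">{n}</a></li>' for i, n in rows)
    return f"<html><body><ul>{links}</ul></body></html>"

print(render_page())
```

As far as the crawler is concerned, this output is indistinguishable from a static HTML file.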


But winners can overpay, and quitters often do not lose as much. A
point that is often lost.


Re: content from database


They will only read HTML documents that are linked by visible URLs
from other HTML documents.

They will:

* Read HTML _documents_ (a "document" is the thing flying across the
web; it doesn't have to have come from a static _file_).

* Follow simple static URLs from these HTML documents, leading to
other HTML documents.

* Follow links to URLs that embed parameters into the URL (query
strings). Maybe they didn't in the past (you'll be told this), but
they do now.

* Follow links in site maps or nav menus, which can include section or
database-item level navigation. If you have a number of items in your
database, then you're probably going to need at least one link for
each item that leads to one virtual HTML document for it. There will
be one or more pages, possibly grouped or structured hierarchically,
that offer lists of such links.
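One way to picture the "one virtual HTML document per item" idea is a sketch like this, assuming a hypothetical URL scheme of /item/<id> and an invented schema:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, name TEXT, descr TEXT)")
db.executemany("INSERT INTO items VALUES (?, ?, ?)",
               [(1, "Widget", "A fine widget."),
                (2, "Gadget", "A shiny gadget.")])

def item_page(item_id):
    """The document a crawler receives after following /item/<id>."""
    name, descr = db.execute(
        "SELECT name, descr FROM items WHERE id = ?", (item_id,)).fetchone()
    return (f"<html><head><title>{name}</title></head>"
            f"<body><h1>{name}</h1><p>{descr}</p></body></html>")

print(item_page(2))
```

An index page (or a hierarchy of them) then just lists plain `<a href="/item/...">` links to each of these, giving the spider a path to every row.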

They particularly like:

* Long-lived stable URLs that represent the same link to the same
piece of content. Read TB-L's famous old paper, "Cool URIs don't
change".

* Google's XML format for site maps.

* Well structured, well-formed, valid, semantic HTML.

* Image annotation in the HTML that's associated (by proximity if
nothing else) with generous descriptive text.
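The sitemap format mentioned above is plain XML and trivial to generate from the database. A hedged sketch (domain and URL scheme invented for illustration; see sitemaps.org for the full spec):

```python
from xml.sax.saxutils import escape

def sitemap(urls):
    """Emit a minimal XML sitemap listing the given absolute URLs."""
    entries = "\n".join(
        f"  <url><loc>{escape(u)}</loc></url>" for u in urls)
    return ('<?xml version="1.0" encoding="UTF-8"?>\n'
            '<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">\n'
            f"{entries}\n"
            '</urlset>')

xml = sitemap([f"https://example.com/item/{i}" for i in (1, 2, 3)])
print(xml)
```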

They won't (reliably at least):

* Execute any JavaScript! JavaScript links, JavaScript menu-
generators, JavaScript fly-out menus: these will all kill crawling.

* Execute any client-side XSLT.  Generate content on the server and
serve it.

* Execute any AJAX

* Flash

* PDF with embedded links

* Follow <form> links, POST actions or anything other than simple
static GET links from <a href="./foo" >...</a>

* Type things into search boxes and press submit.

* Go to places that robots.txt excludes them from
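On the last point, Python's standard library ships the same robots.txt logic a well-behaved spider uses, which makes it easy to see how exclusion works (the rules below are invented for illustration):

```python
from urllib.robotparser import RobotFileParser

# Parse a robots.txt that blocks /admin/ for every crawler.
rp = RobotFileParser()
rp.parse([
    "User-agent: *",
    "Disallow: /admin/",
])

print(rp.can_fetch("Googlebot", "https://example.com/item/1"))
print(rp.can_fetch("Googlebot", "https://example.com/admin/db"))
```

A crawler calls the equivalent of `can_fetch` before every request and simply skips anything it returns False for.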

They will also (in a rather negative way):

* Punish you for "spamdexing", deliberate or accidental.

* Treat "hidden text" as spamdexing, if there's CSS funnies going on
with colour, position or visibility to make some content invisible.

* Ignore <meta> elements

* Give "link farms" a very poor (if any) rating. This can also impact
site maps if you're not careful.

I suggest you read some of the basics on how to write your own web
crawlers / spiders. Once you understand broadly what they themselves
do, then it's quite obvious how you feed them.
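A toy crawler really is this small. The sketch below stubs out the HTTP fetch with a dictionary, but the loop is the whole idea: fetch HTML, extract plain `<a href>` links, queue them, repeat.

```python
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collect the href of every <a> tag in a page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

# Stub "fetch": a real crawler would do an HTTP GET here.
pages = {
    "/": '<a href="/item/1">one</a> <a href="/item/2">two</a>',
    "/item/1": "",
    "/item/2": "",
}

seen, queue = set(), ["/"]
while queue:
    url = queue.pop()
    if url in seen or url not in pages:
        continue
    seen.add(url)
    parser = LinkExtractor()
    parser.feed(pages[url])
    queue.extend(parser.links)

print(sorted(seen))  # every page reachable by plain <a href> links
```

Anything that isn't a simple static link (JavaScript, forms, search boxes) never enters the queue, which is exactly why those techniques hide content from spiders.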

Re: content from database

Thanks a million, Andy. You have been a great help.

You are a genius.

All my doubts and queries are cleared.

