User Guide for Serif WebPlus X4 WPX4-HFMINI-ENG-STA-1
Developing Sites and Pages
By default, the contents of each published web page (especially heading text)
will be indexed. However, in an Internet world of billions of web pages all being
constantly indexed, web developers can optimize this indexing process to allow
a site's pages to appear higher in a user's search results.
Optimization of web pages for search engines is possible in several ways:
•
Meta Tags: Tags store search engine descriptors (i.e., keywords and
a description) for the site and/or an individual page. These tags are
used to allow better matching between entered search engine text (like
you might enter into Google) and the keywords you've associated with
your site or page. Additionally, a robots meta tag also lets you
include/exclude the site or pages from being indexed; hyperlinks to
other pages can also be prevented from being explored (crawled by
"spiders").
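As an illustration, the tags described above might appear in a published page's HTML head section as follows (the keyword, description, and robots values shown are placeholders, not actual WebPlus output):

```html
<head>
  <title>My Widget Site</title>
  <!-- Keywords and a description help search engines match queries to this page -->
  <meta name="keywords" content="widgets, handmade widgets, widget repair">
  <meta name="description" content="Handmade widgets and repair services.">
  <!-- A robots meta tag: index this page, but do not follow its hyperlinks -->
  <meta name="robots" content="index, nofollow">
</head>
```

A value of "noindex" in the robots meta tag would instead exclude the page from indexing altogether.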
•
Robots: Pages (or folders) can be excluded from search-engine
indexing by using a robots file. This works in an equivalent way to the
robots meta tag but uses a text file (robots.txt) to instruct robots or
spiders what not to index. The file simply lists excluded site
page/folder references.
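A minimal robots.txt file, placed at the site root, might look like the sketch below (the page and folder paths are illustrative examples, not paths generated by WebPlus):

```text
# robots.txt - instructs all robots/spiders what NOT to index
User-agent: *
Disallow: /drafts/
Disallow: /private/page1.html
```

The "User-agent: *" line applies the exclusions to all crawlers; each Disallow line lists one excluded page or folder reference.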
•
Sitemaps: The opposite of the "robots" concept; pages can be
included to aid and optimize intelligent crawling/indexing. Site page
references are stored in a dedicated sitemap file (sitemap.xml).
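For reference, a sitemap.xml file follows the standard Sitemaps XML format, sketched below with a placeholder domain (the URLs shown are illustrative, not actual WebPlus output):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <!-- One <url> entry per site page to be crawled; the domain is an example -->
  <url>
    <loc>http://www.example.com/index.html</loc>
  </url>
  <url>
    <loc>http://www.example.com/products.html</loc>
  </url>
</urlset>
```

Each page reference goes in its own url/loc entry; crawlers read this file to discover and prioritize the listed pages.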