Showing posts with the label Web search engine.

Sunday

Automatic Description/Keyword Meta Tags for Blogger SEO | Blog SEO Test™



For blog SEO, especially on Blogger, we sometimes have to run experiments. Based on those experiments, the owner of Blog SEO Test wrote these tips for installing automatic description and keyword meta tags for Blogger SEO, which they have applied themselves, in order to share knowledge and experience with fellow bloggers, so that our blogs rank better in search engines and can compete (fairly) with blogs and sites that have been blogging longer.





There is no denying that a blog's meta description and meta keywords are important to set up, because they give bots/crawlers a detailed explanation of the blog's mission, purpose, and target audience.


What is often forgotten, however, is that the meta description and keywords also help internet users searching on a search engine: they provide a snippet of information that makes a post title more relevant, so users are more likely to click through and read the full content behind the title and description shown in the search results.





Bloggers often think only about how to get their blog to the very top of the first page of a search engine, but without a relevant and accurate meta description, that position alone will not invite users to click through to the post behind the title and description shown in the search engine results.





OK, let's look more closely at how to make the meta description and keywords work well on a blog. Below are the description/keyword meta tags the author used when first creating the blog, previously covered in the post on the best way to install SEO-friendly meta tags in Blogger; it is assumed you already know about and use them.



We then add the description/keyword meta tags so the code looks like this:




    <b:include data='blog' name='all-head-content'/>

    <!-- Start blogseotest.blogspot.com: Changing the Blogger Title Tag -->
    <b:if cond='data:blog.pageType == &quot;item&quot;'>
      <title><data:blog.pageName/> | <data:blog.title/></title>
      <meta expr:content='data:blog.pageName + ", " + data:blog.title + ", " + data:blog.pageName' name='Description'/>
      <meta expr:content='data:blog.pageName + ", " + data:blog.title + ", " + data:blog.pageName' name='Keywords'/>
    <b:else/>
      <title><data:blog.pageTitle/></title>
      <meta name='DESCRIPTION' content='YOUR BLOG DESCRIPTION HERE'/>
      <meta name='KEYWORDS' content='YOUR BLOG KEYWORDS HERE'/>
    </b:if>
    <!-- End blogseotest.blogspot.com: Changing the Blogger Title Tag -->

The result in the search results then looks like this:



For a post page:


Description Meta Tags (screenshot)

For the Home page:


Automatic Keyword Meta Tags (screenshot)




Now, when examining the results of the meta tags above, they do not quite match what we want. From an SEO standpoint, the results do not really provide a description
that complements the post title; they merely show the title plus the
blog name repeated twice. The result is also not great for search
engine position and does not invite clicks from internet users. So the author looked for a better alternative by removing the description meta tag for post pages - <meta expr:content='data:blog.pageName + ", " + data:blog.title + ", " + data:blog.pageName' name='Description'/>. Why?


The reason is that even without a meta description,
Blogger already provides an automatic facility that takes the description from the
first and last paragraphs of each post, and then
takes keywords from across the article in the form of snippets we deliberately mark up (bold/italic text), or else the bot/crawler itself decides based on what it finds.
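
Following that reasoning, a minimal sketch of the conditional block with the Description line removed for post pages would look like the snippet below. This is simply the author's described change applied to the earlier template code, not an officially documented Blogger snippet:

    <b:if cond='data:blog.pageType == &quot;item&quot;'>
      <!-- Post pages: title plus the automatic keyword meta tag only -->
      <title><data:blog.pageName/> | <data:blog.title/></title>
      <!-- Description meta removed: Blogger builds the snippet from the post itself -->
      <meta expr:content='data:blog.pageName + ", " + data:blog.title + ", " + data:blog.pageName' name='Keywords'/>
    <b:else/>
      <!-- Home and other pages keep a hand-written description and keywords -->
      <title><data:blog.pageTitle/></title>
      <meta name='DESCRIPTION' content='YOUR BLOG DESCRIPTION HERE'/>
      <meta name='KEYWORDS' content='YOUR BLOG KEYWORDS HERE'/>
    </b:if>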




Then, after the author removed the description meta for post pages while still keeping the automatic keyword meta
(keywords that differ for each post, following the title) -
<meta expr:content='data:blog.pageName + ", " + data:blog.title + ", "
+ data:blog.pageName' name='Keywords'/> - the result looks like this:




Automatic Description/Keyword Meta Tags (screenshot)


Notice the difference: the title and description are now neatly arranged, from
introduction > category > tags > title, which is certainly SEO friendly
(in the author's opinion).


The results above do, of course, require a creative touch when writing the
first and last paragraphs of an article, so that the displayed
description is relevant to the title. It is up to you to
adjust them so that the automatic description/keyword meta tags for Blogger SEO work perfectly. Hopefully this is useful - good luck!



Thanks to the author of this article:



Author






Agief Ikhsan












Saturday

Meta Tag Generator

Meta Tag Generator Tool © SEO Chat™









The tool takes two inputs: Keywords (a list of relevant keywords for the page) and Description (a short description of the page).
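
The generator's output is a set of meta tags to paste into the page's <head> section. With typical inputs, the generated markup would look roughly like the sketch below (an illustrative example, not the tool's exact output):

    <head>
      <!-- Illustrative meta tags built from the Keywords and Description fields -->
      <meta name="keywords" content="blogger seo, meta tags, meta description, meta keywords"/>
      <meta name="description" content="Tips for installing automatic description and keyword meta tags on a Blogger blog."/>
    </head>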
























Monday

Similar Page Checker









Similar Page Checker

Enter the first URL and the second URL of the pages you want to compare.










How it Works




Search engines are known to take action against websites that contain duplicate or similar content.



Your content could be similar to other websites on the Internet,
or pages from within your own website could be similar to each other
(usually the case with dynamic product catalog pages).



This tool allows you to determine the percentage of similarity between two pages.



The exact percentage of similarity beyond which a search engine may penalize you is not known,
and it varies from search engine to search engine.
Your aim should be to keep your page similarity as LOW as possible.


 


Duplicate Content Filter

This article will help you understand why you might be caught in the filter, and ways to avoid it.





Duplicate Content Filter: What it is and how it works






Duplicate Content has become a huge topic of discussion lately, thanks
to the new filters that search engines have implemented. This article
will help you understand why you might be caught in the filter, and ways
to avoid it. We'll also show you how you can determine if your pages
have duplicate content, and what to do to fix it.






Search engine spam is any deceitful attempt to deliberately trick the
search engine into returning inappropriate, redundant, or poor-quality
search results. Many times this behavior is seen in pages that are
exact replicas of other pages which are created to receive better
results in the search engine. Many people assume that creating multiple
or similar copies of the same page will either increase their chances
of getting listed in search engines or help them get multiple listings,
due to the presence of more keywords.






In order to make a search more relevant to a user, search engines use a
filter that removes the duplicate content pages from the search results,
and the spam along with it. Unfortunately, good, hardworking
webmasters have fallen prey to the filters imposed by the search engines
that remove duplicate content. It is those webmasters who unknowingly
spam the search engines, even though there are things they can do to avoid
being filtered out. In order for you to truly understand the concepts
you can implement to avoid the duplicate content filter, you need to
know how this filter works. 











First, we must understand that the term "duplicate content penalty" is
actually a misnomer. When we refer to penalties in search engine
rankings, we are actually talking about points that are deducted from a
page in order to come to an overall relevancy score. But in reality,
duplicate content pages are not penalized.
Rather they are simply filtered, the way you would use a sieve to remove
unwanted particles. Sometimes, "good particles" are accidentally
filtered out.




Knowing the difference between the filter and the penalty, you can now
understand how a search engine determines what duplicate content is.
There are basically four types of duplicate content that are filtered
out:





  1. Websites with Identical Pages - Identical pages within a site are
    considered duplicate content, and websites that are identical to another
    website on the Internet are also considered spam. Affiliate sites with the
    same look and feel which contain identical content, for example, are
    especially vulnerable to a duplicate content filter. Another example
    would be a website with doorway pages. Many times, these doorways are
    skewed versions of landing pages that are otherwise identical to other
    landing pages. Generally, doorway pages are intended to spam the search
    engines in order to manipulate search engine results.


  2. Scraped Content - Scraped content is content taken from a web
    site and repackaged to make it look different, but in essence it is
    nothing more than a duplicate page. With the popularity of blogs on the
    internet and the syndication of those blogs, scraping is becoming more
    of a problem for search engines.


  3. E-Commerce Product Descriptions - Many eCommerce sites out there
    use the manufacturer's descriptions for the products, which hundreds or
    thousands of other eCommerce stores in the same competitive markets are
    using too. This duplicate content, while harder to spot, is still
    considered spam.


  4. Distribution of Articles - If you publish an article, and it gets
    copied and put all over the Internet, this is good, right? Not
    necessarily for all the sites that feature the same article. This type
    of duplicate content can be tricky, because even though Yahoo and MSN
    determine the source of the original article and deem it most relevant
    in search results, other search engines like Google may not, according
    to some experts.



So, how does a search engine's duplicate content filter work?
Essentially, when a search engine robot crawls a website, it reads the
pages, and stores the information in its database. Then, it compares
its findings to other information it has in its database. Depending
upon a few factors, such as the overall relevancy score of a website, it
then determines which are duplicate content, and then filters out the
pages or the websites that qualify as spam. Unfortunately, if your
pages are not spam, but have enough similar content, they may still be
regarded as spam.


There are several things you can do to avoid the duplicate content
filter. First, you must be able to check your pages for duplicate
content. Using our Similar Page Checker,
you will be able to determine similarity between two pages and make
them as unique as possible. By entering the URLs of two pages, this
tool will compare those pages, and point out how they are similar so
that you can make them unique.


Since you need to know which sites might have copied your site or pages,
you will need some help. We recommend using a tool that searches for
copies of your page on the Internet: www.copyscape.com.
Here, you can put in your web page URL to find replicas of your page
on the Internet. This can help you create unique content, or even
address the issue of someone "borrowing" your content without your
permission.


Let's look at the issue regarding some search engines possibly not
considering the source of the original content from distributed
articles. Remember, some search engines, like Google, use link
popularity to determine the most relevant results. Continue to build
your link popularity, while using tools like www.copyscape.com
to find how many other sites have the same article and, if allowed by
the author, you may be able to alter the article so as to make the content
unique.


If you use distributed articles for your content, consider how relevant
the article is to your overall web page and then to the site as a whole.
Sometimes, simply adding your own commentary to the articles can be
enough to avoid the duplicate content filter; the Similar Page Checker
could help you make your content unique. Further, the more relevant
articles you can add to complement the first article, the better.
Search engines look at the entire web page and its relationship to the
whole site, so as long as you aren't exactly copying someone's pages,
you should be fine.


If you have an eCommerce site, you should write original descriptions
for your products. This can be hard to do if you have many products,
but it really is necessary if you wish to avoid the duplicate content
filter. Here's another example why using the Similar Page Checker
is a great idea. It can tell you how you can change your descriptions
so as to have unique and original content for your site. This also
works well for scraped content. Many scraped content sites offer
news. With the Similar Page Checker, you can easily determine where the
news content is similar, and then change it to make it unique.


Do not rely on an affiliate site which is identical to other sites, and do
not create identical doorway pages. These types of behaviors are not only
filtered out immediately as spam; if another site or page is found to be a
duplicate, there is generally no comparison of the page to the site as a
whole, which can get your entire site in trouble.


The duplicate content filter is sometimes hard on sites that don't
intend to spam the search engines. But it is ultimately up to you to
help the search engines determine that your site is as unique as
possible. By using the tools in this article to eliminate as much
duplicate content as you can, you'll help keep your site original and
fresh.

 




Search Engine Spider Simulator








How it Works





A lot of the content and links displayed on a webpage may not actually
be visible to the search engines, e.g. Flash-based content,
content generated through JavaScript,
or content displayed as images.
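
As an illustration (a hypothetical page fragment, not output from the tool), most of the message in the markup below would be missed by a text-only crawler:

    <!-- Hypothetical fragment: most of its message is invisible to a text-only crawler -->
    <object data="intro.swf" type="application/x-shockwave-flash"></object> <!-- Flash intro: text inside is not indexed -->
    <img src="headline.png"/> <!-- headline rendered as an image, with no ALT text -->
    <div id="promo"></div>
    <script>
      // text injected by JavaScript is not seen by most crawlers
      document.getElementById("promo").innerHTML = "Spring sale on SEO services";
    </script>
    <p>Only this plain paragraph is reliably visible to a search engine spider.</p>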







This tool Simulates a Search Engine by displaying the contents
of a webpage exactly how a Search Engine would see it.







It also displays the hyperlinks that will be followed (crawled)
by a Search Engine when it visits the particular webpage.





See Your Site With the Eyes of a Spider

The article explains how Search Engines view a Webpage.







See Your Site With the Eyes of a Spider













Making efforts to optimize a site is great, but what counts is how
search engines see your efforts. While even the most careful
optimization does not guarantee top positions in search results, if
your site does not follow basic search engine optimisation truths, then it is more than
certain that this site will not score well with search engines. One
way to check in advance how your SEO efforts are seen by search
engines is to use a search engine simulator.




Spiders Explained


Basically, all search engine spiders function on the same principle
– they crawl the Web and index pages, which are stored in a
database; various algorithms are later used to determine the ranking,
relevancy, etc. of the collected pages. While the algorithms for
calculating ranking and relevancy differ widely among search engines,
the way they index sites is more or less uniform, and it is very
important that you know what spiders are interested in and what they
neglect.



Search engine spiders are robots and they do not read your pages
the way a human does. Instead, they tend to see only particular content
and are blind to many extras (Flash, JavaScript) that are intended
for humans. Since spiders determine whether humans will find your site, it
is worth considering what spiders like and what they don't.




Flash, JavaScript, Image Text or Frames?!



Flash, JavaScript and image text are NOT visible to search
engines. Frames are a real disaster in terms of SEO ranking. All of
them might be great in terms of design and usability, but for search
engines they are simply wrong. An incredible mistake one can make
is to have a Flash intro page (frames or no frames, this will hardly
make the situation worse) with the keywords buried in the animation.
Run a page with Flash and images (and
preferably no text or inbound or outbound hyperlinks) through the Search
Engine Spider Simulator tool, and you will
see that to search engines this page appears almost blank.



Running your site through this simulator will show you more than
the fact that Flash and JavaScript are not SEO favorites. In a way,
spiders are like text browsers: they don't see anything that is
not a piece of text. So having an image with text in it means nothing
to a spider, and it will ignore it. A workaround (recommended as an SEO
best practice) is to include a meaningful description of the image in
the ALT attribute of the <img> tag, but be careful not to use
too many keywords in it, because you risk penalties for keyword
stuffing. The ALT attribute is especially essential when you use images
rather than text for links. You can use ALT text to describe what
a Flash movie is about but, again, be careful not to cross the line
between optimization and over-optimization.
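
For example (a hypothetical snippet, not taken from the article), an image used as a link should carry a short, descriptive ALT text rather than a keyword list:

    <!-- Image link with a short, descriptive ALT text -->
    <a href="/blogger-seo-tips.html">
      <img src="images/blogger-seo-tips.png" alt="Blogger SEO tips"/>
    </a>
    <!-- Avoid stuffing the attribute: alt="seo, blogger seo, meta tags, seo tips, best seo tricks" -->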




Are Your Hyperlinks Spiderable?


The search engine spider simulator can be of great help when
trying to figure out whether hyperlinks lead to the right place. For
instance, link exchange websites often put fake links to your site
using JavaScript (with mouseover events and similar tricks to make the link
look genuine), but this is not actually a link that search engines
will see and follow. Since the spider simulator does not display
such links, you'll know that something about the link is wrong.



It is highly recommended to use the <noscript> tag, as
opposed to relying on JavaScript-based menus alone. The reason is that JavaScript-based
menus are not spiderable, and all the links in them will be
ignored as page text. The solution to this problem is to put all menu
item links in the <noscript> tag. The <noscript> tag can
hold a lot, but please avoid using it for link stuffing or any other
kind of SEO manipulation.
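
A minimal sketch of that approach (hypothetical markup, not from the article) looks like this:

    <!-- JavaScript menu with a <noscript> fallback that spiders can follow -->
    <script src="menu.js"></script> <!-- builds the visual drop-down menu for human visitors -->
    <noscript>
      <a href="/index.html">Home</a>
      <a href="/articles.html">Articles</a>
      <a href="/contact.html">Contact</a>
    </noscript>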



If you happen to have tons of hyperlinks on your pages (although
it is highly recommended to have fewer than 100 hyperlinks on a page),
then you might have a hard time checking whether they are OK. For instance,
if you have pages that return “403 Forbidden”, “404
Page Not Found” or similar errors that prevent the spider from
accessing the page, then it is certain that this page will not be
indexed. It is necessary to mention that a spider simulator does not
deal with 403 and 404 errors, because it checks where links lead,
not whether the target of the link is in place, so you need to use
other tools for checking whether the targets of hyperlinks are the
intended ones.




Looking for Your Keywords


While there are specific tools, like the Keyword Playground
or the Website Keyword Suggestions, which deal with keywords in more detail,
search engine spider simulators also help you see, with the eyes of a
spider, where keywords are located within the text of the page. Why is
this important? Because keywords in the first paragraphs of a page
weigh more than keywords in the middle or at the end. And even if keywords
visually appear to us to be at the top, this may not be the way
spiders see them. Consider a standard Web page laid out with tables. In this
case the code that describes the page layout (like
navigation links or separate cells with text that is the same
site-wide) may come first in the source and, what is worse, can be so long that the
actual page-specific content will be screens away from the top of the
page. When we look at the page in a browser, to us everything is fine
– the page-specific content is on top – but since in the HTML
code this is just the opposite, the page will not be recognized as
keyword-rich.




Are Dynamic Pages Too Dynamic to be Seen At All?


Dynamic pages (especially ones with question marks in the URL) are
another extra that spiders do not love, although many search engines
do index dynamic pages as well. Running the spider simulator will
give you an idea of how well your dynamic pages are accepted by search
engines. Useful suggestions on how to deal with search engines and
dynamic URLs can be found in the Dynamic URLs vs. Static URLs article.




Meta Keywords and Meta Description


Meta keywords and meta description, as the names imply, are
found in the <META> tags of an HTML page. Meta keywords and
meta descriptions were once the single most important criterion for
determining the relevance of a page, but search engines now employ
alternative mechanisms for determining relevancy, so you can safely
skip listing keywords and a description in meta tags (unless you want
to add instructions there for the spider about what to index and what not to;
apart from that, meta tags are not very useful anymore).


 source: http://www.webconfs.com/spider-view-article-9.php























