Do People Really Believe What They Read Online? Unfortunately, sometimes the answer is yes, and when your reputation is at risk, anything that helps to filter the web’s truths from its lies is a welcome development. In a research paper – Knowledge-Based Trust – Google announced its intention to implement such a filter: a truth-based algorithm that ranks pages according to the accuracy of their content.
According to RT.news, Google isn’t alone: “The news comes at a time when app developers are working on ways to verify all kinds of online content including – mailboxes, webpages, and applications – by cross-referencing online information with aggregates like PolitiFact, FactCheck.org, and Snopes.”
Working to improve the quality and relevance of content across the web
Google’s new algorithm, currently under development, won’t just rely on third-party signals such as links; it will search for facts and determine their accuracy. This is consistent with other major algorithm updates Google has released in the past few years, such as Panda, Penguin, and Hummingbird, all of which reward high-quality, fresh content and an improved user experience – an ethos we share at Igniyte.
Igniyte works with clients to enhance their online reputations by helping to promote regular, relevant, quality content. We support clients in ensuring that new articles are published online: on their websites, blogs, and social media profiles; across industry forums; on social networks; and via the press.
Thoughtful use of social media is, and will no doubt remain, important. Social media assets rank well in Google because the search engine recognises them as valuable. Refreshing and sharing social media content helps a business to be seen, as well as allowing it to communicate with its networks and industry figures. We provide best-practice advice and hands-on support to ensure clients’ social and professional media profiles are current and relevant.
Additionally, PR is a key tool for growing your business online and promoting your ‘assets’ – your websites, blogs, and social media profiles – on relevant news sites and business forums. We work with clients to ensure any existing PR is seen, and to create and promote press releases.
How do algorithms rank sites?
In the early days of the web, a search engine would return a list of pages that were relevant only in that they matched the search term: although they included the right words, they may have offered limited value to the user.
Over time, things have improved, and hundreds of factors now influence relevance. However, search engines also typically assume that the more popular a site, page, or document is, the more valuable the information it contains must be. Popularity implies authority in a particular topic. But as Search Engine Journal points out, “Popularity does not always mean a web page contains accurate information. A good example may be celebrity gossip websites.”
Popularity and relevance aren’t determined manually. Instead, the engines employ mathematical equations (algorithms) to sort the wheat from the chaff (relevance), and then to rank the wheat in order of quality (popularity).
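The two-stage idea – filter by relevance, then rank by popularity – can be illustrated with a toy sketch. This is not Google’s actual algorithm (which weighs hundreds of signals); the weights, data, and page names below are invented for illustration.

```python
# Toy ranking sketch: filter/score pages by relevance to the query,
# then blend in popularity (here, a crude inbound-link count).
# Weights and data are illustrative, not real ranking factors.

def relevance(page_text: str, query: str) -> float:
    """Fraction of query terms that appear in the page text."""
    terms = query.lower().split()
    text = page_text.lower()
    return sum(term in text for term in terms) / len(terms)

def score(page: dict, query: str) -> float:
    # Arbitrary blend: mostly relevance, plus a popularity bonus.
    popularity = page["inbound_links"] / 100
    return 0.7 * relevance(page["text"], query) + 0.3 * popularity

pages = [
    {"url": "gossip.example", "text": "celebrity gossip news", "inbound_links": 90},
    {"url": "facts.example", "text": "celebrity biography and news", "inbound_links": 20},
]

ranked = sorted(pages, key=lambda p: score(p, "celebrity news"), reverse=True)
# The heavily linked gossip site wins, even though links say nothing
# about whether its claims are accurate.
```

Note that in this sketch the popular gossip site outranks the less popular one despite both matching the query – exactly the accuracy gap that a truth-based signal is meant to address.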
According to a New Scientist report, “the new algorithm draws on Google’s Knowledge Vault”, a collection of 2.8 billion facts extracted from the internet. By checking pages against that database, and cross-referencing related facts, Google believes the algorithm could assign each page and website a truth score.
The algorithm searches for what it calls “Knowledge Triples”, each consisting of three elements: a subject, a predicate, and an object. A subject is a “real-world entity” such as a person, place, or thing. A predicate describes an attribute of that entity. An object is “an entity, a string, a numerical value, or a date.” Together, the three elements form a fact. An example of a triple is: apples grow on trees.
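As a rough illustration of the idea – and emphatically not Google’s implementation, whose probabilistic model is far more sophisticated – a triple can be modelled as a (subject, predicate, object) tuple and a page scored by how many of its extracted triples match a fact database. All names and facts below are invented for the sketch.

```python
# Hypothetical sketch: score a page by checking its (subject, predicate,
# object) triples against a tiny stand-in for a fact database such as
# the Knowledge Vault. The data and scoring rule are illustrative only.

KNOWLEDGE_BASE = {
    ("apples", "grow on", "trees"),
    ("paris", "is capital of", "france"),
}

def truth_score(page_triples: list[tuple[str, str, str]]) -> float:
    """Fraction of a page's extracted triples found in the knowledge base."""
    if not page_triples:
        return 0.0
    hits = sum(triple in KNOWLEDGE_BASE for triple in page_triples)
    return hits / len(page_triples)

# A page asserting one true fact and one false one scores 0.5.
page = [
    ("apples", "grow on", "trees"),       # matches the knowledge base
    ("paris", "is capital of", "italy"),  # contradicted by the knowledge base
]
```

Under a scheme like this, pages whose triples frequently fail to match the database would receive a low score and, as the article notes, be bumped down in the results.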
Pages with a high proportion of false claims would be bumped down in the search results.
Searching for knowledge triples is not without problems; the number of irrelevant, off-topic triples is just one issue. Google is clear that the new algorithm is still at the research stage, with plenty of issues to be ironed out before it can be used.