webmasterresources.nl

traffic-builders.com

seroundtable.com

  • 15/01/2026 Daily Search Forum Recap: January 15, 2026
    Here is a recap of what happened in the search forums today...
  • 15/01/2026 Personal Intelligence In Gemini & Soon In Google Search AI Mode
    Google is rolling out what it calls "Personal Intelligence" in the Gemini app, and it will soon also come to Google Search within AI Mode. Google announced that Personal Intelligence in Gemini "connects Gmail, Photos, YouTube and Search in a single tap."
  • 15/01/2026 Google's John Mueller Says Universal Commerce Protocol Won't Kill SEO
    Earlier this week, Google announced Universal Commerce Protocol (UCP) and it has a lot of the industry worried. One SEO, Ramon Eijkemans, said, "I hate to bring this, but SEOs: we're gonna be f$@*ed." John Mueller from Google responded on Bluesky and said, "I disagree."
  • 15/01/2026 Google: Favicons In Search Not Impacted By Google Core Updates
    While a number of Google Search features are impacted by Google's core updates, whether favicons show up is not one of them. Yes, your site can drop in the rankings, not show up in Google Discover, or be removed from Top Stories and other features, but favicons are not impacted by core updates.
  • 15/01/2026 New Google AI Mode Ads: Explore Guides and Articles
    We have seen ads in AI Mode before, but there may be a new Google Ads format coming to AI Mode results: not just the Direct Offer ads, but Explore guides and article ads.

More »

searchenginejournal.com

More »

indeedseo.com

backlinko.com

webeffectief.com

  • 14/12/2025 Blog faster? Build your own blog post template
    Smart content creators and bloggers are always looking for ways to work more efficiently. And users of the WordPress block editor have a clear head start: they can build their own blog post template in no time and publish articles faster. Reading time: approx. 3 minutes. Your own blog post template? The idea is to create a kind of "standard template", a […]
  • 07/12/2025 Discover De Nieuwe Bereikformule – a free download
    I probably don't need to tell you: almost everyone creating online content today struggles to get that content in front of the right people. What actually still works today? Download De Nieuwe Bereikformule for free and find out. If you create online content, you naturally want […]
  • 03/12/2025 Writing a blog with ChatGPT: 14 pro tips and example prompts for a top article
    Using AI tools as automated content machines is always a bit risky. But writing a blog with ChatGPT as a super-powerful writing buddy? That should certainly be possible. You can probably blog faster and more often that way. Reading time: approx. 10 minutes. After this article: writing a blog with ChatGPT, not by ChatGPT. And yes, I keep repeating this by way of disclaimer: […]
  • 30/11/2025 Blog even more effectively? Use a logbook
    Reading time: approx. 3 minutes. Blogging is fun, but it shouldn't take up too much time. Agreed? Well, then it comes down to organizing yourself and your back office properly. With a digital logbook for your blog project, for example. After this article: What goes into your blog's logbook? The idea is to […]
  • 23/11/2025 Blogzone: a collection of current Dutch-language blogs
    Reading time: less than 2 minutes. With the Blogzone I return to where it all began: collecting blog projects that, in my view, are worth discovering. It's no secret that "small" blogs are having a hard time these days. So it seemed like a good idea to […]

kgom.nl

searchengineland.com

andrescholten.nl

More »

seoblackhat.com

  • 23/04/2016 What Happened with SEO Black Hat?
    It’s been 5 years since I wrote a post. I’ve been on lifecation for the last 3 and a half years (not working and living the dream). But there’s an exciting reason I’m back: A Yuuuuuge exploit that I’m going to share, but I’ll get to that in due time. First, let’s talk about the […]
  • 17/04/2016 Hello Again. World.
    Zombie SEO Black Hat and QuadsZilla about to become reanimated.
  • 05/02/2011 Google Lied About Manual Changes
    We Cannot Manually Change Results . . . But we did.
  • 03/02/2011 Clickstream For Dummies: How The Ranking Factor Works
    Since the majority of people can’t seem to figure out how clickstream data could be used as a Search Engine Ranking Factor, without ever scraping the actual page, I’ll give you a hint.
  • 01/02/2011 Bing is Just Better
    Google is scared. They call Bing’s Results “a Cheap imitation”, but the fact is that Bing is now consistently delivering better results.


bluehatseo.com

  • 09/06/2011 Guest Post: How To Start Your First Media Buy
    This post was written by a good friend of mine and one of the best media buyers I know Max Teitelbaum. He owns WhatRunsWhere and has previously offered to write a guest post on the subject for you guys, but with all the buzz and relevancy of his new WhatRunsWhere tool I requested he write [...]
  • 12/07/2010 Open Questions: When To Never Do Article Submissions
    Got a question in my E-Commerce SEO Checklist post from Rania, who didn’t leave me a link for credit. “4. Steal your competitors articles and product reviews and do article distribution.” You recommend STEALING articles from competitors as an advanced SEO tactic?! Seriously?! How about recommending that users create their own unique content in order to increase their [...]
  • 09/07/2010 SEO Checklist for E-Commerce Sites
    Answering a question on Wickedfire here. If you own an Ecommerce site and don’t know where to begin on the SEO go through this check list. In total, it’ll cost less than $500. 1. Signup with all related forums. Put your site in the footer links and go through answering product related questions on a weekly basis. 2. [...]
  • 22/06/2010 How To Take Down A Competitor's Website: Legally
    They stole your articles didn’t they? You didn’t even know until they outranked you. They jacked your $50 lander without a single thought to how you’d feel? Insensitive pricks They violated your salescopy with synonyms. Probably didn’t even use a rubber. They rank #8 and you rank #9 on EVERY KEYWORD! bastards! Listen, why don’t you just relax. Have a seat over there [...]
  • 11/11/2009 Addon Domain Spamming With Wordpress and Any Other CMS
    I got this question from Primal in regards to my post on Building Mininets Eli, I like the post and your entire site. Thanks for sharing your knowledge. One thing confuses me about this particular tactic. Where are you getting the content from? You mentioned Audioscrobbler and Youtube API but I am not focusing on a music [...]

More »

seobook.com

  • 16/05/2025 A Declining Internet

    For as broad and difficult a problem as running a search engine is, and for as many competing interests as are involved, Google ran a pretty clean show when Matt Cutts was there. Some of what they did before the algorithms could catch up was, of course, fearmongering (e.g., if you sell links you might be promoting fake brain cancer solutions), but Google generally did a pretty good job with the balance between organic and paid search.

    Early on, search ads were clearly labeled; then less so. Ad density was light; then less so.

    On the stock chart it appears as a somewhat regular set of compounded growth elements, but it is a series of decisions: what to measure, what to optimize, what to subsidize, and what to sacrifice.

    Savvy publishers could ride whatever signals were over-counted (keyword repetition, links early on, focused link anchor text, keyword domains, etc.) and catch new tech waves (like blogging or select social media channels) to keep growing as the web evolved. In some cases what was once a signal of quality would later become an anomaly ... the thing that boosted your rank for years eventually started to suppress your rank as new signals were created and signals composed of ratios of other signals got folded into ranking and re-ranking.

    Over time, as organic growth became harder, the money guys started to override the talent, like in 2019 when a Google "code yellow" over ad revenue had the ads team push the organic search and Chrome teams to intentionally degrade user experience to drive increased search query volume:

    “I think it is good for us to aspire to query growth and to aspire to more users. But I think we are getting too involved with ads for the good of the product and company.” - Googler Ben Gomes

    A healthy and sustainable ecosystem relies upon the players at the center operating a clean show.

    If they decide not to, and eat the entire pie, things fall apart.

    One set of short-term optimizations is another set of long-term failures.

    The specificity of an eHow article gives it a good IR score, and AdSense pays for a thousand similar articles to be created; the "optimized" ecosystem then takes on a shallow sameness, which requires creating new ranking signals.

    Q1 2025 was the first quarter in the history of the company in which the Google partner network represented less than 10% of Google ad revenues.

    Google's fortunes have never been more misaligned with web publishers than they are today. This statement becomes more true each day that passes.

    That ecosystem of partners is hundreds of thousands of publishers representing millions of employees. Each with their own costs and personal optimization decisions.

    Publishers create feature works, which are expensive, and then cross-subsidize that expensive work with cheaper, more profitable pieces. When they receive search traffic to some type of page that is seemingly outperforming today, they assume it is a strategy that will carry them into the future. But hitting the numbers today can mean missing them next year, as the ranking signal mix squeezes the profits out of those "optimizations," and what drove higher traffic today becomes part of a negative sitewide classifier that lowers rankings across the board in the future.

    Last August, Googler Ryan Moulton published a graph of newspaper employment from 2010 until now, showing about a 70% decline. That 70% decline also doesn't factor in that many mastheads have been rolled up by private equity players which lever them up on debt and use all the remaining blood to make interest payments - sometimes to themselves - while sticking other taxpayers with the losses from the underfunded pension plans.

    The quality of the internet that we've enjoyed for the last 20 years was an overhang from when print journalism still made money. The market for professionally written text is now just really small, if it exists at all.

    Ryan was asked "what do you believe is the real cause for the decline in search quality, then? Or do you think there hasn't been a decline?"

    His now deleted response stated "It's complicated. I think it's both higher expectations and a declining internet. People expect a lot more from their search results than they used to, while the market for actually writing content has basically disappeared."

    The above is the already baked cake we are starting from.

    The cake where blogs were replaced with social feeds, newspapers got rolled up by private equity players, and large broad "authority" branded sites partner with money guys to paste on affiliate sections while indy affiliate sites are buried ... the algorithmic artifacts of Google first promoting the funding of eHow, then responding to the success of entities like Demand Media with Vince, Panda, Penguin, and the Helpful Content Update.

    The next layer of the icky blurry line is AI.

    “We have 3 options: (1) Search doesn’t erode, (2) we lose Search traffic to Gemini, (3) we lose Search traffic to ChatGPT. (1) is preferred but the worst case is (3) so we should support (2)” - Google's Nick Fox

    So long as Google survives, everything else is non-essential. ;)

    StackOverflow questions over time, source SEDE; sadface, lunch has been eaten pic.twitter.com/tXZShoIBfG— Marc Gravell (@marcgravell) May 15, 2025

    AI overview distribution is up 116% over the past couple months.

    Yep, I have seen this too. AIOs surged with the March 2025 core update -> AI Overviews Have Doubled (25M AIOs Analyzed)

    "The total number of AI Overviews grew by 116% between March 12th (pre-update) and May 6th, according to our database."

    And: "Reddit now appears in 5.5% of… pic.twitter.com/W0FxWo3qlQ— Glenn Gabe (@glenngabe) May 13, 2025

    Google features Reddit *a lot* in their search results. Other smaller forums, not so much. A company consisting of many forums saw a negative impact from algorithm updates earlier this year.

    VerticalScope, behind 1,200+ online communities, just confirmed Google updates have negatively impacted their business.

    Some highlights from their Q1 '25 earnings update:

    - Revenue decreased 8% to $13.6M
    - Increased consulting costs for "AI initiatives and SEO optimizations"… pic.twitter.com/1OYsxEKm9e— Glen Allsopp (@ViperChill) May 14, 2025

    Going back to that whole bit about how not fully disclosing economic incentives risks promoting fake brain cancer cures ... well, how are AI search results constructed? How well do they cite their sources? And are the sources they cite also using AI to generate content?

    "its gotten much worse in that "AI" is now, on many "search engines", replacing the first listings which obfuscates entirely where its alleged "answer" came from, and given that AI often "hallucinates", basically making things up to a degree that the output is either flawed or false, without attribution as to how it arrived at that statement, you've essentially destroyed what was "search." ... unlike paid search which at least in theory can be differentiated (assuming the search company is honest about what they're promoting for money) that is not possible when an alleged "AI" presents the claimed answers because both the direct references and who paid for promotion, if anyone is almost-always missing. This is, from my point of view anyway, extremely bad because if, for example, I want to learn about "total return swaps" who the source of the information might be is rather important -- there are people who are absolutely experts (e.g. Janet Tavakoli) and then there are those who are not. What did the "AI" response use and how accurate is its summary? I have no way to know yet the claimed "answer" is presented to me." - Karl Denninger

    The eating of the ecosystem is so thorough Google now has money to invest in Saudi Arabian AI funds.

    Periodically ViperChill highlights big media conglomerates which dominate the Google organic search results.

    One of the strongest horizontal publishing plays online has been IAC. They've grown brands like Expedia, Match.com, Ticketmaster, Lending Tree, Vimeo, and HSN. They always show up in the big publishers dominating Google charts. In 2012 they bought About.com from the New York Times and broke About.com into vertical sites like The Spruce, Very Well, The Balance, TripSavvy, and Lifewire. They have some old sites like Investopedia from their 2013 ValueClick deal. And then they bought out the magazine publisher Meredith, which publishes titles like People, Better Homes and Gardens, Parents, and Travel + Leisure. What does their performance look like? Not particularly good!

    DDM reported just 1% year-over-year growth in digital advertising revenue for the quarter. It posted $393.1 million in overall revenue, also up 1% YOY. DDM saw a 3% YOY decline in core user sessions, which caused a dip in programmatic ad revenue. Part of that downturn in user engagement was related to weakening referral traffic from search platforms. For example, DDM is starting to see Google Search’s AI Overviews eat into its traffic.

    Google's early growth was organic through superior technology, then clever marketing via their toolbar, and later a set of forced bundlings on Android combined with payola for default search placements in third-party web browsers. A few years ago the UK government did a study which claimed that even if Microsoft gave Apple a 100% revshare on Bing, they still couldn't compete with the Google bid for default search placement in Apple Safari.

    Microsoft offered over a 100% ad revshare to set Bing as the default search engine and went so far as discussing selling Bing to Apple in 2018 - but Apple stuck with Google's deal.

    In search, if you are not on Google you don't exist.

    As Google built out various verticals, they also created ranking signals which in some cases were parasitical, and in other cases purely anticompetitive. To this day Google is facing billions of dollars in new suits across Europe for their shopping search strategy.

    The Obama administration was an extension of Google, so the FTC gave Google a pass in spite of discovering some clearly anticompetitive behavior with real consumer harm. The Wall Street Journal published a series of articles after obtaining half the pages of the FTC's research into Google's conduct:

    "Although Google originally sought to demote all comparison shopping websites, after Google raters provided negative feedback to such a widespread demotion, Google implemented the current iteration of its so-called 'diversity' algorithm."

    What good is a rating panel if you get to keep re-asking the question in slightly different ways until you get the answer you want? And then place a lower-quality clone front and center simply because it is associated with the home team?

    "Google took unusual steps to "automatically boost the ranking of its own vertical properties above that of competitors,” the report said. “For example, where Google’s algorithms deemed a comparison shopping website relevant to a user’s query, Google automatically returned Google Product Search – above any rival comparison shopping websites. Similarly, when Google’s algorithms deemed local websites, such as Yelp or CitySearch, relevant to a user’s query, Google automatically returned Google Local at the top of the [search page].”"

    The forced ranking of house properties is even worse when one recalls they were borrowing third party content without permission to populate those verticals.

    Now with AI there is a blurry line of borrowing where many things are simply probabilistic. And, technically, Google could claim they sourced content from a third party which stole the original work or was a syndicator of it.

    As Google kept eating the pie they repeatedly overrode user privacy to boost their ad income, while using privacy as an excuse to kneecap competing ad networks.

    Remember the old FTC settlement over Google's violation of Safari browser cookies? That is the same Google which planned on deprecating third-party cookies in Chrome and was even testing hiding user IP addresses so that other ad networks would be screwed. Better yet, online businesses might need to pay Google a subscription fee of some sort to efficiently filter through the fraud conducted in their web browser.

    HTTPS everywhere was about blocking data leakage to other ad networks.

    AMP was all about stopping header bidding. It gave preferential SERP placement in exchange for using a Google-only ad stack.

    Even as Google was dumping tech costs on publishers, they were taking a huge rake of the ad revenue from the ad serving layer: "Google's own documents show that Google has siphoned off thirty-five cents of each advertising dollar that flows through Google's ad tech tools."

    After acquiring DoubleClick to further monopolize the online ad market, Google merged user data for their own ad targeting, while hashing the data to block publishers from matching profiles:

    "In 2016, as part of Project Narnia, Google changed that policy, combining all user data into a single user identification that proved invaluable to Google's efforts to build and maintain its monopoly across the ad tech industry. ... After the DoubleClick acquisition, Google "hashed" (i.e., masked) the user identifiers that publishers previously were able to share with other ad technology providers to improve internet user identification and tracking, impeding their ability to identify the best matches between advertisers and publisher inventory in the same way that Google Ads can. Of course, any purported concern about user privacy was purely pretextual; Google was more than happy to exploit its users' privacy when it furthered its own economic interests."

    This chat was deep in the spoliation evidence of the cases (it wasn't purged) to demonstrate the substantive discussions Google senior execs (Sissie) have over chat. My read is she is weighing in on the same issue from a different forum (privacy law in EU). 2/2 pic.twitter.com/11EBO7xqqh— Jason Kint (@jason_kint) May 9, 2025

    In terms of cost, I really don't think the O&O impact has been understood too, especially on YouTube. - Googler David Mitby

    Did we tee up the real $ price tag of privacy? - Googler Sissie Hsiao

    Google continues to spend billions settling privacy-related cases. Settling those suits out of court is better than having full discovery be used to generate a daisy chain of additional lawsuits.

    As the Google lawsuits pile up, evidence of how they stacked the deck becomes more clear.

  • 16/05/2025 Google's Hyung-Jin Kim Shares Google Search Ranking Signals

    On February 18, 2025 Google's Hyung-Jin Kim was interviewed about Google's ranking signals. Below are notes from that interview.

    "Hand Crafting" of Signals

    Almost every signal, aside from RankBrain and DeepRank (which are LLM-based) are hand-crafted and thus able to be analyzed and adjusted by engineers.

    • To develop and use these signals, engineers look at data and then take a sigmoid or other function and figure out the threshold to use. So, the "hand crafting" means that Google takes all those sigmoids and figures out the thresholds.
      • In the extreme hand-crafting means that Google looks at the relevant data and picks the mid-point manually.
      • For the majority of signals, Google takes the relevant data (e.g., webpage content and structure, user clicks, and label data from human raters) and then performs a regression.
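    The hand-crafting described above - fit a sigmoid to observed data, then pick a threshold - can be sketched in a few lines. Everything here (the sample data, the midpoint heuristic, the function names) is invented for illustration; it is a minimal sketch of the idea, not Google's code:

```python
import math

def sigmoid(x: float, midpoint: float, steepness: float = 1.0) -> float:
    """Map a raw measurement to a 0..1 signal around a hand-picked midpoint."""
    return 1.0 / (1.0 + math.exp(-steepness * (x - midpoint)))

# Hypothetical raw measurements (e.g. some click-derived statistic) paired
# with human-rater labels (1 = good, 0 = bad).
samples = [(0.2, 0), (0.9, 0), (1.4, 1), (2.5, 1), (3.1, 1)]

# "Extreme hand-crafting": look at the data and pick the midpoint manually,
# e.g. halfway between the highest-scoring bad page and the lowest good one.
lowest_good = min(x for x, label in samples if label == 1)   # 1.4
highest_bad = max(x for x, label in samples if label == 0)   # 0.9
midpoint = (lowest_good + highest_bad) / 2                   # 1.15

# A new page measuring 2.0 on the raw statistic gets a signal above 0.5.
signal = sigmoid(2.0, midpoint)
```

    For most signals the notes say Google instead runs a regression over the data; either way the output stays a transparent, adjustable curve an engineer can inspect and re-tune.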

    Navboost. This was HJ's second signal project at Google. HJ has many patents related to Navboost and he spent many years developing it.

    ABC signals. These are the three fundamental signals. All three were developed by engineers. They are raw, ...

    • Anchors (A) - a source page pointing to a target page (links). ...
    • Body (B) - terms in the document ...
    • Clicks (C) - historically, how long a user stayed at a particular linked page before bouncing back to the SERP. ...

    ABC signals are the key components of topicality (or a base score), which is Google's determination of how the document is relevant to a query.

    • T* (Topicality) effectively combines (at least) these three ranking signals in a relatively hand-crafted way. ... Google uses to judge the relevance of the document based on the query term.
    • It took a significant effort to move from topicality (which is at its core a standard "old style" information retrieval ("IR") metric) ... signal. It was in a constant state of development from its origin until about 5 years ago. Now there is less change.
      • Ranking development (especially topicality) involves solving many complex mathematical problems.
      • For topicality, there might be a team of ... engineers working continuously on these hard problems within a given project.
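    The "relatively hand-crafted" combination of A, B, and C into a topicality score might look something like the sketch below. The linear form and the weights are assumptions made up for illustration - nothing about the real formula is public:

```python
def topicality(anchor_score: float, body_score: float, click_score: float,
               w_a: float = 0.3, w_b: float = 0.4, w_c: float = 0.3) -> float:
    """Combine the ABC signals into a base relevance score for one
    query/document pair. The weights here are hypothetical."""
    return w_a * anchor_score + w_b * body_score + w_c * click_score

# Two documents for the same query: one with strong anchor text pointing at
# it, one with stronger on-page content and slightly better click behavior.
doc1 = topicality(anchor_score=0.9, body_score=0.4, click_score=0.5)  # 0.58
doc2 = topicality(anchor_score=0.3, body_score=0.9, click_score=0.6)  # 0.63
```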

    The reason why the vast majority of signals are hand-crafted is that if anything breaks Google knows what to fix. Google wants their signals to be fully transparent so they can trouble-shoot them and improve upon them.

    • Microsoft builds very complex systems using ML techniques to optimize functions. So it's hard to fix things - e.g., to know where to go and how to fix the function. And deep learning has made that even worse.
    • This is a big advantage of Google over Bing and others. Google faced many challenges and was able to respond.
      • Google can modify how a signal responds to edge cases, for example in response to various media/public attention challenges ...
      • Finding the correct edges for these adjustments is difficult, but would be easy to reverse engineer and copy from looking at the data.

    Ranking Signals "Curves"

    Google engineers plot ranking signal curves.

    The curve fitting is happening at every single level of signals.

    If Google is forced to give information on clicks, URLs, and the query, it would be easy for competitors to figure out the high-level buckets that compose the final IR score. High-level buckets are:

    • ABC — topicality
      • Topicality is connected to a given query
    • Navboost
    • Quality
      • Generally static across multiple queries and not connected to a specific query.
      • However, in some cases the Quality signal incorporates information from the query in addition to the static signal. For example, a site may have high-quality but general information, so a query interpreted as seeking very narrow/technical information may be directed to a quality site that is more technical.

    Q* (page quality (i.e., the notion of trustworthiness)) is incredibly important. If competitors see the logs, then they have a notion of “authority” for a given site.

    Quality score is hugely important even today. Page quality is something people complain about the most.

    • HJ started the page quality team ~ 17 years ago.
    • That was around the time when the issue with content farms appeared.
      • Content farms paid students 50 cents per article and they wrote 1000s of articles on each topic. Google had a huge problem with that. That's why Google started the team to figure out the authoritative source.
      • Nowadays, people still complain about the quality and AI makes it worse.

    Q* is about ... This was and continues to be a lot of work but could be easily reverse engineered because Q is largely static and largely related to the site rather than the query.

    Other Signals

    • eDeepRank. eDeepRank is an LLM system that uses BERT, transformers. Essentially, eDeepRank tries to take LLM-based signals and decompose them into components to make them more transparent. HJ doesn't have much knowledge on the details of eDeepRank.
    • PageRank. This is a single signal relating to distance from a known good source, and it is used as an input to the Quality score.
    • ... (popularity) signal that uses Chrome data.
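    The description of PageRank as "distance from a known good source" suggests something like a breadth-first traversal out from trusted seed pages. The sketch below is one rough reading of that description - the graph and the seed set are hypothetical, and this is not Google's implementation:

```python
from collections import deque

def seed_distance(graph: dict, seeds: list) -> dict:
    """Breadth-first link distance from a set of trusted seed pages.
    A smaller distance would feed a stronger quality input."""
    dist = {s: 0 for s in seeds}
    queue = deque(seeds)
    while queue:
        page = queue.popleft()
        for linked in graph.get(page, ()):
            if linked not in dist:
                dist[linked] = dist[page] + 1
                queue.append(linked)
    return dist

# Hypothetical link graph: seed links to a and c; a links to b.
graph = {"seed": ["a", "c"], "a": ["b"], "c": []}
distances = seed_distance(graph, ["seed"])
```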

    Search Index

    • HJ's definition is that search index is composed of the actual content that is crawled - titles and bodies and nothing else, i.e., the inverted index.
    • There are also other separate specialized inverted indexes for other things, such as feeds from Twitter, Macy's etc. They are stored separately from the index for the organic results. When HJ says index, he means only for the 10 blue links, but as noted below, some signals are stored for convenience within the search index.
    • Query-based signals are not stored, but computed at the time of query.
      • Q* - largely static but in certain instances affected by the query and has to be computed online (see above)
    • Query-based signals are often stored in separate tables off to the side of the index and looked up separately, but for convenience Google stores some signals in the search index.
      • This way of storing the signals allowed Google to ...

    User-Side Data

    By User Side Data, Google's search engineers mean user interaction data, not the content/data that was created by users. E.g., links between pages that are created by people are not User Side data.

    Search Features

    • There are different search features - 10 blue links as well as other verticals (knowledge panels, etc). They all have their own ranking.
    • Tangram (fka Tetris). HJ started the project to create Tangram to apply the basic principle of search to all of the features.
    • Tangram/Tetris is another algorithm that was difficult to figure out how to do well but would be easy to reverse engineer if Google were required to disclose its click/query data. By observing the log data, it is easy to reverse engineer and to determine when to show the features and when to not.
    • Knowledge Graph. A separate team (not HJ's) was involved in its development.
    • Knowledge Graph is used beyond being shown on the SERP panel.
      • Example — “porky pig” feature. If people query about the relation of a famous person, Knowledge Graph tells traditional search the name of the relation and the famous person, to improve search results - Barack Obama's wife's height query example.
    • Self-help suicide box example. Incredibly important to figure it out right, and tons of work went into it, figuring out the curves, threshold, etc. With the log data, this could be easily figured out and reverse engineered, without having to do any of the work that Google did.

    Reverse Engineering of Signals

    There was a leak of Google documents which named certain components of Google's ranking system, but the documents don't go into specifics of the curves and thresholds.

    The documents alone do not give you enough details to figure it out, but the data likely does.

  • 11/11/2023 Google Antitrust Leaked Documents

    User interaction signals

    Create relevancy signals out of user reads, clicks, scrolls, and mouse hovers.

    Not how search works

    Search does not work by delivering results which match a query that ends at the user. This view of search is incomplete.

    How search works

    The flow of the engagement metrics from the end user / searcher back to the search engine helps the search engine refine the result set.

    Fake document understanding

    Google looks at the actions of searchers much more than they look at raw documents. If documents elicit a positive reaction from searchers that is proof the document is good. If a document elicits negative reactions then they presume the document is bad.

    Google learns from searchers

    The result set is designed not just to serve the user, but to create an interaction set where Google can learn from the user & incorporate logged user data into influencing the rankings for future searches.

    Dialog is the source of the magic

    Each user interaction gives Google data to refine their ranking algorithms and make search smarter.

    Happy users provide informed user interactions

    Informed user interactions are part of a virtuous cycle which allows Google to better train their models & understand language patterns, then use that understanding to deliver a more relevant search result set.

    Prior user behavior is used as a baseline.

    Google is not pushing search personalization anywhere near as hard as they once did (at least not outside of localization), but in the above Google states that prior user selections are one of Google's strongest ranking signals.

    Once again, rather than understanding documents directly, they can consider the users who chose the documents. Users can be mapped based on actions beyond standard demographics, so that the interactions of similar users with the result set choices are given more weight.
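    The interaction signals described in this section (reads, clicks, scrolls, hovers, dwell time) could be aggregated per query/result pair along these lines. The event shapes, thresholds, and weights below are all invented for illustration:

```python
def interaction_signal(events: list) -> float:
    """Aggregate logged events for one query/result pair into a score.
    Long dwells count for the result; quick bounces back to the SERP
    count against it. All numbers here are hypothetical."""
    score = 0.0
    for event in events:
        if event["type"] == "click":
            # Treat a short dwell as pogo-sticking (a negative vote).
            score += 1.0 if event["dwell_seconds"] >= 30 else -0.5
        elif event["type"] == "hover":
            score += 0.1
    return score

events = [
    {"type": "click", "dwell_seconds": 120},  # satisfied click
    {"type": "click", "dwell_seconds": 4},    # quick bounce back to the SERP
    {"type": "hover"},
]
signal = interaction_signal(events)  # 1.0 - 0.5 + 0.1
```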

    Google revenue growth is consistent

    Core Google ad revenue grows much more consistently than any other large media business, growing at 20% to 22% year after year in 8 of 9 years, with the one outlier year growing 30%.

    Apple is paid by Google to not compete in search.

    Apple got around a 50% revshare from the mid-2000s through to the iPhone deal renewal.

    Manipulating ad auctions

    Google artificially inflates the ad rank of the runner-up in some ad auctions to bleed the auction winner dry. Ad pricing is not based on any sort of honest auction mechanism; rather, Google looks across at your bids and your reactions to price gouging to keep increasing the ad prices it charges you.
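A simplified second-price auction shows why inflating the runner-up is so profitable: the winner pays just above the runner-up's bid, so raising the runner-up's effective bid raises the winner's price without changing who wins. The numbers below are invented for illustration, and this sketch ignores quality scores and the other inputs a real ad rank formula uses.

```python
# Minimal generalized second-price auction sketch: the winner pays the
# runner-up's bid plus a small increment. Inflating the runner-up's
# effective bid squeezes the winner while leaving the outcome unchanged.

def winning_cpc(bids, increment=0.01):
    """Return what the top bidder pays: runner-up bid + increment,
    capped at the winner's own bid."""
    top, runner_up = sorted(bids, reverse=True)[:2]
    return min(top, runner_up + increment)

honest = winning_cpc([5.00, 2.00])    # winner pays just above $2.00
squeezed = winning_cpc([5.00, 4.50])  # runner-up inflated: pays just above $4.50
# Same winner, same ad slot, more than double the price.
```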

    Organics below the fold

    Google not only pushes down the organic result set with 3 or 4 ads above the regular results, but then they can include other selections scraped from across the web in an information-lite format to try to focus attention back upward. Then after users get past a singular organic search result it is time to redirect user attention once again using a "People also ask" box.

    Google can further segment user demand via ecommerce website styled filters, though some of the filters offered may be for other websites, in addition to things like size, weight, color, price, and location.

  • 24/09/2023 The Magical Black Box

    Google's mission statement is "organize the world's information and make it universally accessible and useful."

    That mission is so profound & so important the associated court documents in their antitrust cases must be withheld from public consumption.

    Hey. The full exhibit list just posted in DC federal court for USA vs Google. J/k, they literally posted the numbers of all of the admitted exhibits which would be unsealed in a sane world where public interest is respected even more so because the defendant is insanely powerful. pic.twitter.com/FViD40xVmf— Jason Kint (@jason_kint) September 23, 2023

    Before document sharing was disallowed, some were shared publicly.

    Internal emails stated:

    • Hal Varian was off in his public interviews where he suggested it was the algorithms, rather than the amount of data, which is the prime driver of relevancy.
    • Apple would not get any revshare if there was a user choice screen & must set Google as the default search engine to qualify for any revshare.
    • Google has a policy of being vague about using clickstream data to influence ranking, though they have heavily relied upon clickstream data to influence ranking. Advances in machine learning have made it easier to score content to where the clickstream data had become less important.
    • When Apple Maps launched & Google Maps lost the default position on iOS, Google Maps lost 60% of their iOS distribution - and that was with how poorly the Apple Maps rollout went.
    • Google sometimes subverted their typical auction dynamics and would flip the order of the top 2 ads to boost ad revenues.
    • Google had a policy of "shaking the cushions" to hit the quarterly numbers by changing advertiser ad prices without informing advertisers that they'd be competing in a rigged auction with artificially manipulated shill bids from the auctioneer competing against them.

    When Google talked about hitting the quarterly numbers by shaking the cushions, the 5% number which was shared skewed a bit low:

    For a brand campaign focused on a niche product, she said the average CPC at $11.74 surged to $25.85 over the last six months, amounting to a 108% increase. However, there wasn’t an incremental return on sales.

    “The level to which [price manipulations] happens is what we don’t know,” said Yang. “It’s shady business practices because there’s no regulation. They regulate themselves.”

    Early in the history of search ads Google blocked trademark keyword bidding. They later allowed it, which led to a conundrum for some advertisers. If you do not defend your trademark you could lose it, but if you agree with competitors not to bid on each other's trademarks the FTC could come after you - as they did with 1-800 Contacts. This setup forces many brands to participate in auctions where they are arbitraging their own pre-existing brand equity. The ad auctioneer runs shady auctions where it looks across at your account behavior and bids, then adjusts bid floors to suck more money out of you. This amounts to something akin to the bid jamming done in early Overture, except here it is the house itself doing it to you! The last auction I remember like that was SnapNames, where Nelson Brady, a criminal on the executive team, used the handle halverez to leverage participants' max bids and put in bids just under them. The goal of his fraud? To hit the numbers & get an earn-out bonus - similar to how Google insiders were discussing "shaking the cushions" to hit the number.

    Halverez created a program which looked across aggregate bid data, joined auctions which had only 1 other participant, and then used the one-way view of competing bids to put in a shill bid to drive up costs - which sounds conceptually similar to Google's "shaking the cushions."

    "Just looking at this very tactically, and sorry to go into this level of detail, but based on where we are I'm afraid it's warranted. We are short __% queries and are ahead on ads launches so are short __% revenue vs. plan. If we don't hit plan, our sales team doesn't get its quota for the second quarter in a row and we miss the street's expectations again, which is not what Ruth signaled to the street so we get punished pretty badly in the market. We are shaking the cushions on launches and have some candidates in May that will help, but if these break in mid-late May we only get half a quarter of impact or less, which means we need __% excess to where we are today and can't do it alone. The Search team is working together with us to accelerate a launch out of a new mobile layout by the end of May that will be very revenue positive (exact numbers still moving), but that still won't be enough. Our best shot at making the quarter is if we get an injection of at least __%, ideally __%, queries ASAP from Chrome. Some folks on our side are running a more detailed, Finance-based, what-if analysis on this and should be done with that in a couple of days, but I expect that these will be the rough numbers.

    The question we are all faced with is how badly do we want to hit our numbers this quarter? We need to make this choice ASAP. I care more about revenue than the average person but think we can all agree that for all of our teams trying to live in high cost areas another $___,___ in stock price loss will not be great for morale, not to mention the huge impact on our sales team." - Google VP Jerry Dischler

    Google is also pushing advertisers away from keyword-based bidding and toward a portfolio approach of automated bidding called Performance Max, where you give Google your credit card and budget then they bid as they wish. By blending everything into a single soup you may not know where the waste is & it may not be particularly easy to opt out of poorly performing areas. Remember enhanced AdWords campaigns?

    Google continues to blur dataflow outside of their ad auctions to try to bring more of the ad spend into their auctions.

    Wow. Google. Years behind other browsers (aka monopoly power), Google is attempting to deprecate tracking system A (aka third party cookies) and replace it with another tracking system B (aka Topics) that treats sites as G data mules.

    This is deceptive as hell comparing B to A. pic.twitter.com/hCBJgYr7qn— Jason Kint (@jason_kint) September 22, 2023

    The amount Google is paying Apple to be the default search provider is staggering.

    What is $18 billion / year buying ? The DoJ has narrowed in an agreement not to compete between Apple and Google: "Sanford Bernstein estimates Google will pay Apple between $18 billion and $19 billion this year for default search status" https://t.co/HmoZxCZkqm— Tim Wu (@superwuster) September 22, 2023

    Tens of billions of dollars is a huge payday. No way Google would hyper-optimize other aspects of their business (locating data centers near dams, prohibiting use of credit card payments for large advertisers, cutting away ad agency management fees, buying Android, launching Chrome, using broken HTML on YouTube to make it render slowly on Firefox & Microsoft Edge to push Chrome distribution, all the dirty stuff Google did to violate user privacy with overriding Safari cookies, buying DoubleClick, stealing the ad spend from banned publishers rather than rebating it to advertisers, creating a proprietary version of HTML & force ranking it above other results to stop header bidding, & then routing around their internal firewall on display ads to give their house ads the advantage in their ad auctions, etc etc etc) and then just throw over a billion dollars a month needlessly at a syndication partner.

    This is right -- Google was once an extraordinary product, but over time became stagnant & too grabby of random revenue as it ate its ecosystem. Makes it the right time to force Google to try and compete without reaching for its bribery checkbook
    https://t.co/gDhtDMjfo0— Tim Wu (@superwuster) September 22, 2023

    For perspective on the scale of those payments consider that it wasn't that long ago Yahoo! was considered a big player in search and Apollo bought Yahoo! plus AOL from Verizon for about $5 billion & then was quickly able to sell branding & technology rights in Japan to Softbank for $1.6 billion & other miscellaneous assets for nearly a half-billion, reducing the net cost to only $3 billion.

    If Google loses this lawsuit and the payments to Apple are declared illegal, that would be a huge revenue (and profit) hit for Apple. Apple would be forced to roll out their own search engine. This would cut away at least 30% of the search market from Google & it would give publishers another distribution channel. Most likely Apple Search would launch with a lower ad density than Google has for short term PR purposes & publishers would have a year or two of enhanced distribution before Apple's ad load matched Google's ad load.

    It is hard to overstate how strong Apple's brand is. For many people the cell phone is like a family member. I recently went to upgrade my phone and Apple's local store closed early in the evening at 8pm. The next day when they opened at 10 there was a line to wait in to enter the store, like someone was trying to get concert tickets. Each privacy snafu from Google helps strengthen Apple's relative brand position.

    Google has also diluted the quality of their own brand by rewriting search queries excessively to redirect traffic flows toward more commercial interests. Wired covered how Project Mercury works:

    This onscreen Google slide had to do with a “semantic matching” overhaul to its SERP algorithm. When you enter a query, you might expect a search engine to incorporate synonyms into the algorithm as well as text phrase pairings in natural language processing. But this overhaul went further, actually altering queries to generate more commercial results. ... Most scams follow an elementary bait-and-switch technique, where the scoundrel lures you in with attractive bait and then, at the right time, switches to a different option. But Google “innovated” by reversing the scam, first switching your query, then letting you believe you were getting the best search engine results. This is a magic trick that Google could only pull off after monopolizing the search engine market, giving consumers the false impression that it is incomparably great, only because you’ve grown so accustomed to it.

    The mobile search results on Google require at least a screen or two of scrolls to get to the organic results if there is a hint of commercial intent behind the search query. Once they have monetized the real estate they are reliant on broader economic growth & using ad buy bundling to drive cross-subsidies of other non-search ad inventory, which may contain more than a bit of fraud. Performance Max may max out your spend without actually performing for anybody other than Google.

    Google not only placed shill bids on lower competition terms to squeeze defensive brand bids and boost auction floor pricing, but also implemented shill bids in competitive ad auctions:

    Michael Whinston, a professor of economics at the Massachusetts Institute of Technology, said Friday that Google modified the way it sold text ads via “Project Momiji” – named for the wooden Japanese dolls that have a hidden space for friends to exchange secret messages. The shift sought “to raise the prices against the highest bidder,” Whinston told Judge Amit Mehta in federal court in Washington.

    While Google's search marketshare is rock solid, the number of search engines available has increased significantly over the past few years. Not only is there Bing and DuckDuckGo but the tail is longer than it was a few years back. In addition to regional players like Baidu and Yandex there's now Brave Search, Mojeek, Qwant, Yep, and You. GigaBlast and Neeva went away, but anything that prohibits selling defaults to a company with over 90% marketshare will likely lead to dozens more players joining the search game. Search traffic will remain lucrative for whoever can capture it, as no matter how much Google tries to obfuscate marketing data the search query reflects the intent of the end user.

    “Search advertising is one of the world’s greatest business models ever created…there are certainly illicit businesses (cigarettes or drugs) that could rival these economics, but we are fortunate to have an amazing business.” - Google VP of Finance Mike Roszak

  • 20/02/2023 AI-Driven Search

    I just dusted off the login here to realize I hadn't posted in about a half-year & figured it was time to write another one. ;)

    Yandex Source Code Leak

    Some of Yandex's old source code was leaked, and few cared about the ranking factors shared in the leak.

    Mike King made a series of Tweets on the leak.

    I'm gonna take a break, but I've seen a lot of people say "Yandex is not Google."

    That's true, but it's still a state of the art search engine and it's using a lot of Google's open source tech like Tensor Flow, BERT, map reduce, and protocol buffers.

    Don't sleep on this code.— Mic King (@iPullRank) January 28, 2023

    The signals used for ranking included things like link age

    Main insights after analysing this list:

    #1 Age of links is a ranking factor. pic.twitter.com/U47uWvEq9w— Alex Buraks (@alex_buraks) January 27, 2023

    and user click data including visit frequency and dwell time

    #8 A lot of ranking factors connected with user behaivor - CTR, last-click, time on site, bounce rate.

    Note: I'm 100% sure that in Yandex thouse factors impacting much more than in Google. pic.twitter.com/nBhe5cpPFx— Alex Buraks (@alex_buraks) January 27, 2023

    Google came from behind and was eating Yandex's lunch in search in Russia, particularly by leveraging search default bundling in Android. When the Russian antitrust regulator nixed that bundling, Yandex regained strength. Of course the war in Ukraine has made everything crazy in terms of geopolitics. That's one reason almost nobody cared about the Yandex data leak. The other reason is few could make sense of what all the signals are or how to influence them.

    The complexity of search - a big black box with big swings 3 or 4 times a year - shifts any successful long term online publisher away from being overly focused on information retrieval and ranking algorithms, toward the other aspects of publishing which will hopefully paper over SEO issues. Signs of a successful & sustainable website include:

    • It remains operational even if a major traffic source goes away.
    • People actively seek it out.
    • If a major traffic source cuts its distribution people notice & expend more effort to seek it out.

    As black box as search is today, it is only going to get worse in the coming years.

    ChatGPT Hype

    The hype surrounding ChatGPT is hard to miss. Fastest growing user base. Bing integration. A sitting judge using the software to help write documents for the court. And, of course, the get-rich-quick crew is out in full force.

    Some enterprising people with specific professional licenses may be able to mint money for a window of time

    there will probably be a 12 to 24 month sweet spot for lawyers smart enough to use AI, where they will be able to bill 100x the hours they currently bill, before most of that job pretty much vanishes— Mike Solana (@micsolana) February 7, 2023

    but for most people the way to make money with AI will be doing something that AI can not replicate.

    It's adorable that people are only slowly realizing that Google search at least fed sites traffic, while chat AI thingies slurp up and summarize content, which they anonymize and feed back, leaving the slurped sites traffic-less and dying. But, innovation.— Paul Kedrosky (@pkedrosky) February 9, 2023

    It is, in a way, a tragedy of the commons problem, with no easy way to police "over grazing" of the information commons, leading to automated over-usage and eventual ecosystem collapse.— Paul Kedrosky (@pkedrosky) February 9, 2023

    Bing Integration of Open AI Technology

    The New Bing integrated OpenAI's ChatGPT technology to allow chat-based search sessions which ingest web content and use it to create something new, giving users direct answers and allowing re-probing for refinements. Microsoft stated the AI features also improved their core rankings outside of the chat model: "Applying AI to core search algorithm. We’ve also applied the AI model to our core Bing search ranking engine, which led to the largest jump in relevance in two decades. With this AI model, even basic search queries are more accurate and more relevant."

    Here's a demo of the new #AI-powered @Bing in @MicrosoftEdge, courtesy of @ijustine! pic.twitter.com/xIDjWSHYA0— DataChazGPT (not a bot) (@DataChaz) February 7, 2023

    Fawning Coverage

    Some of the tech analysis around the AI algorithms is more than a bit absurd. Consider this passage:

    the information users input into the system serves as a way to improve the product. Each query serves as a form of feedback. For instance, each ChatGPT answer includes thumbs up and thumbs down buttons. A popup window prompts users to write down the “ideal answer,” helping the software learn from its mistakes.

    A long time ago the Google Toolbar had a smiley face and a frown face on it. The signal there was basically pure spam. At one point Matt Cutts mentioned Google would look at things that got a lot of upvotes to see how else they were spamming. Direct Hit was also spammed into oblivion many years before that.

    In some ways the current AI search stuff is trying to re-create Ask Jeeves, but Ask had already lost to Google long ago. The other thing AI search is similar to is voice assistant search. Maybe the voice assistant search stuff which has largely failed will get a new wave of innovation, but the current AI search stuff is simply a text interface of the voice search stuff with a rewrite of the content.

    High Confidence, But Often Wrong

    There are two other big issues with correcting an oracle.

    • You'll lose your trust in an oracle when you repeatedly have to correct it.
    • If you know the oracle is awful in your narrow niche of expertise you probably won't trust it on important issues elsewhere.

    Beyond those issues there is the concept of blame or fault. When a search engine returns a menu of options if you pick something that doesn't work you'll probably blame yourself. Whereas if there is only a single answer you'll lay blame on the oracle. In the answer set you'll get a mix of great answers, spam, advocacy, confirmation bias, politically correct censorship, & a backward looking consensus...but you'll get only a single answer at a time & have to know enough background & have enough topical expertise to try to categorize it & understand the parts that were left out.

    We are making it easier and cheaper to use software to re-represent existing works, at the same time we are attaching onerous legal liabilities to building something new.

    Creating A Fuzzy JPEG

    This New Yorker article did a good job explaining the concept of lossy compression:

    "The fact that Xerox photocopiers use a lossy compression format instead of a lossless one isn’t, in itself, a problem. The problem is that the photocopiers were degrading the image in a subtle way, in which the compression artifacts weren’t immediately recognizable. If the photocopier simply produced blurry printouts, everyone would know that they weren’t accurate reproductions of the originals. What led to problems was the fact that the photocopier was producing numbers that were readable but incorrect; it made the copies seem accurate when they weren’t. ... If you ask GPT-3 (the large-language model that ChatGPT was built from) to add or subtract a pair of numbers, it almost always responds with the correct answer when the numbers have only two digits. But its accuracy worsens significantly with larger numbers, falling to ten per cent when the numbers have five digits. Most of the correct answers that GPT-3 gives are not found on the Web—there aren’t many Web pages that contain the text “245 + 821,” for example—so it’s not engaged in simple memorization. But, despite ingesting a vast amount of information, it hasn’t been able to derive the principles of arithmetic, either. A close examination of GPT-3’s incorrect answers suggests that it doesn’t carry the “1” when performing arithmetic."

    Exciting New Content Farms

    Ted Chiang then goes on to explain the punchline ... we are hyping up eHow 2.0:

    Even if it is possible to restrict large language models from engaging in fabrication, should we use them to generate Web content? This would make sense only if our goal is to repackage information that’s already available on the Web. Some companies exist to do just that—we usually call them content mills. Perhaps the blurriness of large language models will be useful to them, as a way of avoiding copyright infringement. Generally speaking, though, I’d say that anything that’s good for content mills is not good for people searching for information. The rise of this type of repackaging is what makes it harder for us to find what we’re looking for online right now; the more that text generated by large language models gets published on the Web, the more the Web becomes a blurrier version of itself.

    The same New Yorker article raised the idea that if the AI were great it should trust its own output as input for making new versions of its own algorithms - but how could it score itself against itself when its own flaws are embedded recursively throughout algorithmic iterations, without any source labeling?

    Testing on your training data is considered a cardinal error in machine learning. Using prior output as an input creates similar problems.

    Each time AI eats a layer of the value chain it leaves holes in the ecosystem, where the primary solution is to pay for what was once free. Even the "buy nothing" movements have a commercial goal worth fighting over.

    As AI offers celebrity voices, impersonates friends, tracks people, automates marketing, and creates deep fake celebrity-like content, it will move more of social media away from ad revenue toward a subscription-based model. Twitter's default "for you" tab will only recommend content from paying subscribers. People will subscribe to and pay for a confirmation bias they know (even - or especially - if it is not approved within the state-preferred set of biases), provided there is a person & a personality associated with it. They'll also want any conversations with AI agents to remain private.

    When the AI stuff was a ragtag startup with little to lose, the label "open" was important to draw interest. As commercial prospects improved with the launch of GPT-4, they shifted away from the "open," citing both safety and competitive reasons for the secrecy. Much of the wow factor in generative AI comes from recycling something while dropping the source, making it appear new while being anything but. And then the first big money number becomes the justification for further investments in add-ons & competitors.

    Google's AI Strategy

    Google fast followed Bing's news with a vaporware announcement of Bard. Some analyze Google letting someone else go first as a sign Google is behind the times and is getting caught out by an upstart.

    Google bought DeepMind in 2014 for around $600 million. They've long believed in AI technology, and clearly lead the category, but they haven't been using it to re-represent third party content in the SERPs to the degree Microsoft is now doing in Bing.

    My view is Google had to let someone else go first in order to defuse any associated antitrust heat. "Hey, we are just competing, and are trying to stay relevant to changing consumer expectations" is an easier sell when someone else goes first. One could argue the piss poor reception to the Bard announcement is actually good for Google in the long term, as it makes them look like they have stronger competition than they do, rather than being a series of overlapping monopoly market positions (in search, web browsers, web analytics, mobile operating systems, display ads, etc.)

    Google may well have major cultural problems - "They are all the natural consequences of having a money-printing machine called “Ads” that has kept growing relentlessly every year, hiding all other sins. (1) no mission, (2) no urgency, (3) delusions of exceptionalism, (4) mismanagement" - though Google is not far behind in AI. Look at how fast they opened up Bard to end users.

    AI = Money / Increased Market Cap

    The capital markets are the scorecard for capitalism. It is hard to miss how much the market loved the Bing news for Microsoft & how bad the news was for Google.

    Google Stock vs. Microsoft Stock after both AI Presentations: pic.twitter.com/wATkw1pTxj— Ava (AI) (@ArtificialAva) February 8, 2023

    Millions Suddenly Excited About Bing

    In a couple days over a million people signed up to join a Bing wait list.

    We're humbled and energized by the number of people who want to test-drive the new AI-powered Bing! In 48 hours, more than 1 million people have joined the waitlist for our preview. If you would like to join, go to https://t.co/4sjVvMSfJg! pic.twitter.com/9F690OWRDm— Yusuf Mehdi (@yusuf_i_mehdi) February 9, 2023

    Your Margin is My Opportunity

    Microsoft is pitching this as a margin compression play for Google

    $MSFT CEO is declaring war:

    "From now on, the [gross margin] of search is going to drop forever...There is such margin in search, which for us is incremental. For Google it’s not, they have to defend it all" [@FT]— The Transcript (@TheTranscript_) February 8, 2023

    that may also impact their TAC spend

    PREDICTION: Google’s $15B deal with Apple to be the default search on iPhone will be re-negotiated and be a bidding war between MSFT/Bing and Google.

    It will become at least $25B, if not more.

    If MSFT is willing to spend $10B on OpenAI, they’ll spend even more here.— Alexandr Wang (@alexandr_wang) February 7, 2023

    ChatGPT costs around a couple cents per conversation: "Sam, you mentioned in a tweet that ChatGPT is extremely expensive on the order of pennies per query, which is an astronomical cost in tech. SA: Per conversation, not per query."

    The other side of potential margin compression comes from requiring additional computing power to deliver results:

    Our sources indicate that Google runs ~320,000 search queries per second. Compare this to Google’s Search business segment, which saw revenue of $162.45 billion in 2022, and you get to an average revenue per query of 1.61 cents. From here, Google has to pay for a tremendous amount of overhead from compute and networking for searches, advertising, web crawling, model development, employees, etc. A noteworthy line item in Google’s cost structure is that they paid in the neighborhood of ~$20B to be the default search engine on Apple’s products.
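The per-query figure in that quote is easy to sanity-check from the two numbers it cites. A quick back-of-envelope calculation:

```python
# Back-of-envelope check of the quoted figures: ~320,000 search queries
# per second against $162.45B of 2022 Search revenue works out to
# roughly 1.6 cents of revenue per query.

queries_per_second = 320_000
seconds_per_year = 60 * 60 * 24 * 365
annual_queries = queries_per_second * seconds_per_year  # ~1.01e13 queries

revenue = 162.45e9                                      # 2022 Search revenue, USD
revenue_per_query = revenue / annual_queries

print(f"{revenue_per_query * 100:.2f} cents per query") # prints "1.61 cents per query"
```

That thin per-query margin is why adding expensive AI inference on top of every query is framed as a margin-compression threat.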

    Beyond offering a conversational interface, Bing is also integrating AI content directly in their search results on some search queries. It goes *BELOW* all the ads & *ABOVE* the organic results.

    Seems @bing is showing their new ChatGPT in the organic search results for Chrome users just below 4 ads (I removed 3 ads for screenshot) pic.twitter.com/NP8W03f3I9— @iwanow@aus.social (@davidiwanow) March 20, 2023

    The above sort of visual separator eye candy has historically had a net effect of shifting click distributions away from organics toward the ads. It is why Google features "people also ask" and similar in their search results.

    AI is the New Crypto

    Microsoft is pitching that even when AI is wrong it can offer "usefully" wrong answers. And a lot of the "useful" wrong stuff can also be harmful: "there are a ton of very real ways in which this technology can be used for harm. Just a few: Generating spam, Automated romance scams, Trolling and hate speech ,Fake news and disinformation, Automated radicalization (I worry about this one a lot)"

    "I knew I had just seen the most important advance in technology since the graphical user interface. This inspired me to think about all the things that AI can achieve in the next five to 10 years. The development of AI is as fundamental as the creation of the microprocessor, the personal computer, the Internet, and the mobile phone. It will change the way people work, learn, travel, get health care, and communicate with each other. Entire industries will reorient around it. Businesses will distinguish themselves by how well they use it." - Bill Gates

    Since AI is the new crypto, everyone is integrating it, if only in press release format, while banks ban it. All of Microsoft's consumer-facing & business-facing products are getting integrations. Google is treating AI as the new Google+.

    Remember all the hype around STEM? If only we can churn out more programmers? Learn to code!

    Well, how does that work out if the following is true?

    "The world now realizes that maybe human language is a perfectly good computer programming language, and that we've democratized computer programming for everyone, almost anyone who could explain in human language a particular task to be performed." - Nvidia CEO Jensen Huang

    AI is now all over Windows. And for a cherry on top of the hype cycle:

    A gradual transition gives people, policymakers, and institutions time to understand what’s happening, personally experience the benefits and downsides of these systems, adapt our economy, and to put regulation in place. It also allows for society and AI to co-evolve, and for people collectively to figure out what they want while the stakes are relatively low.

    We believe that democratized access will also lead to more and better research, decentralized power, more benefits, and a broader set of people contributing new ideas. As our systems get closer to AGI, we are becoming increasingly cautious with the creation and deployment of our models.

    We have a nonprofit that governs us and lets us operate for the good of humanity (and can override any for-profit interests), including letting us do things like cancel our equity obligations to shareholders if needed for safety and sponsor the world’s most comprehensive UBI experiment.

    Algorithmic Publishing

    The algorithms that allow dirt cheap quick rewrites won't be used just by search engines re-representing publisher content, but also by publishers to churn out bulk content on the cheap.

    After Red Ventures acquired CNET, it started publishing AI-written content. The series of tech articles covering that AI content ran for about a month and only ended recently. In the past that sort of coverage would have led to a manual penalty, but with the current antitrust heat Google can't really afford to rock the boat and prove its market power that way. In fact, Google's editorial stance is now such that Red Ventures can do journalist layoffs in close proximity to that AI PR blunder.

    Men's Journal also had AI content problems.

    Here's why I am very concerned for website owners. https://t.co/RgKrXUocZT is similar to ChatGPT but up to date and conversational.

    My bet is that Google's AI Chat will be similar to this but better. If so, while some people will still visit the websites listed, many will not. pic.twitter.com/jWbsTqeveF— Dr. Marie Haynes (@Marie_Haynes) January 30, 2023

    AI content poured into a trusted brand monetizes the existing brand equity until people (and algorithms) learn not to trust the brands that have been monetized that way.

    A funny sidebar here: the original Farmer update that was aimed at eHow initially skipped eHow, because so many journalists were writing about how horrible eHow was. Those collective efforts to find the worst of eHow and write about it constantly made eHow look like a legitimately sought-after branded destination. Google only downranked eHow after collecting end-user data via a toolbar, where angry journalists facing less secure job prospects could vote to nuke eHow, thus creating the "signal" that eHow's rankings deserved to be torched. Demand Media's Livestrong ranked well far longer than eHow did.

    Enshittification

    The process of pouring low cost backfill into a trusted masthead is the general evolution of online media ecosystems:

    This strategy meant that it became progressively harder for shoppers to find things anywhere except Amazon, which meant that they only searched on Amazon, which meant that sellers had to sell on Amazon. That's when Amazon started to harvest the surplus from its business customers and send it to Amazon's shareholders. Today, Marketplace sellers are handing 45%+ of the sale price to Amazon in junk fees. The company's $31b "advertising" program is really a payola scheme that pits sellers against each other, forcing them to bid on the chance to be at the top of your search. ...

    ... once those publications were dependent on Facebook for their traffic, it dialed down their traffic. First, it choked off traffic to publications that used Facebook to run excerpts with links to their own sites, as a way of driving publications into supplying fulltext feeds inside Facebook's walled garden. This made publications truly dependent on Facebook – their readers no longer visited the publications' websites, they just tuned into them on Facebook. The publications were hostage to those readers, who were hostage to each other. Facebook stopped showing readers the articles publications ran, tuning The Algorithm to suppress posts from publications unless they paid to "boost" their articles to the readers who had explicitly subscribed to them and asked Facebook to put them in their feeds. ...

    ... "Monetize" is a terrible word that tacitly admits that there is no such thing as an "Attention Economy." You can't use attention as a medium of exchange. You can't use it as a store of value. You can't use it as a unit of account. Attention is like cryptocurrency: a worthless token that is only valuable to the extent that you can trick or coerce someone into parting with "fiat" currency in exchange for it. You have to "monetize" it – that is, you have to exchange the fake money for real money. ...
Even with that foundational understanding of enshittification, Google has been unable to resist its siren song. Today's Google results are an increasingly useless morass of self-preferencing links to its own products, ads for products that aren't good enough to float to the top of the list on its own, and parasitic SEO junk piggybacking on the former.

    Bing finally won a PR battle against Google, but Microsoft is shooting itself in the foot by undermining the magic and imagination of the narrative: pushing stricter chat limits, increasing search API fees, testing ads in the AI search results, and threatening to cut off search syndication partners if the index is used to feed AI chatbots.

    The enshittification concept feels more like a universal law than a theory.

    Uber: $150 ride to the airport which used to be $30
    Airbnb: $109/night + $2500 cleaning fee

    Aaaaand we're back to cabs & hotels

    InNoVaTiOn!— ShitFund (@ShitFund) May 31, 2021

    When Yahoo, Twitter & Facebook underperform and the biggest winners like Google, Microsoft, and Amazon are doing big layoff rounds, everyone is getting squeezed.

    One answer is that the only type of maintenance that’s even semi-prestigious in American society is software maintenance.

    That is, it's not prestigious to be plumber, mechanic, or electrician.

    You can make money, but it doesn't have cultural cachet.

    And so maintenance suffers.— Balaji (@balajis) February 14, 2023

    AI rewrites accelerate the squeeze:

    "When WIRED asked the Bing chatbot about the best dog beds according to The New York Times product review site Wirecutter, which is behind a metered paywall, it quickly reeled off the publication’s top three picks, with brief descriptions for each." ... "OpenAI is not known to have paid to license all that content, though it has licensed images from the stock image library Shutterstock to provide training data for its work on generating images."

    The above is what Paul Kedrosky was talking about when he wrote of AI rewrites in search being a Tragedy of the Commons problem.

    A parallel problem is the increased cost of getting your science fiction short story read when magazines shut down submissions due to a rash of AI-spam submissions:

    The rise of AI-powered chatbots is wreaking havoc on the literary world. Sci-fi publication Clarkesworld Magazine is temporarily suspending short story submissions, citing a surge in people using AI chatbots to “plagiarize” their writing.

    The magazine announced the suspension days after Clarkesworld editor Neil Clarke warned about AI-written works posing a threat to the entire short-story ecosystem.

    Warnings Serving As Strategy Maps

    "He who fights with monsters might take care lest he thereby become a monster. And if you gaze for long into an abyss, the abyss gazes also into you." - Nietzsche

    Going full circle here, early Google warned against ad-driven search engines, then Google became the largest ad play in the world. Similarly ...

    OpenAI was created as an open source (which is why I named it “Open” AI), non-profit company to serve as a counterweight to Google, but now it has become a closed source, maximum-profit company effectively controlled by Microsoft.

    Not what I intended at all.— Elon Musk (@elonmusk) February 17, 2023

    Elon wants to create a non-woke AI, but he'll still have some free speech issues.

    Over time more of the web will be "good enough" rewrites, and the JPEG will keep getting fuzzier:

    "This new generation of chat-based search engines are better described as “answer engines” that can, in a sense, “show their work” by giving links to the webpages they deliver and summarize. But for an answer engine to have real utility, we’re going to have to trust it enough, most of the time, that we accept those answers at face value. ... The greater concentration of power is all the more important because this technology is both incredibly powerful and inherently flawed: it has a tendency to confidently deliver incorrect information. This means that step one in making this technology mainstream is building it, and step two is minimizing the variety and number of mistakes it inevitably makes. Trust in AI, in other words, will become the new moat that big technology companies will fight to defend. Lose the user’s trust often enough, and they might abandon your product. For example: In November, Meta made available to the public an AI chat-based search engine for scientific knowledge called Galactica. Perhaps it was in part the engine’s target audience—scientists—but the incorrect answers it sometimes offered inspired such withering criticism that Meta shut down public access to it after just three days, said Meta chief AI scientist Yann LeCun in a recent talk."


    As the economy becomes increasingly digital, AI algorithms have deep implications across it. Things like voice rights, knock-offs, virtual re-representations, source attribution, copyright of input, copyright of output, and the like are obvious. But how far do we allow algorithms to track a person's character flaws and exploit them? Consider horse racing ads that follow a gambling addict around the web, or weight loss ads served to a girl with anorexia who keeps clicking on them.

    One of the biggest use cases for paid AI chatbots so far is fantasy sexting. It is far easier to program a lovebot filled with confirmation bias than it is to improve oneself. Digital soma.

    When AI is connected directly to the Internet and automates away many white collar jobs what comes next? As AI does everything for you do the profit margins shift across from core product sales to hidden junk fees (e.g. ticket scalper marketplaces or ordering flowers for Mother's Day where you get charged separately for shipping, handling, care, weekend shipping, Sunday shipping, holiday shipping)?

    We’ve added initial support for ChatGPT plugins — a protocol for developers to build tools for ChatGPT, with safety as a core design principle. Deploying iteratively (starting with a small number of users & developers) to learn from contact with reality: https://t.co/ySek2oevod pic.twitter.com/S61MTpddOV— Greg Brockman (@gdb) March 23, 2023

    "LLMs aren’t just the biggest change since social, mobile, or cloud–they’re the biggest thing since the World Wide Web. And on the coding front, they’re the biggest thing since IDEs and Stack Overflow, and may well eclipse them both. But most of the engineers I personally know are sort of squinting at it and thinking, “Is this another crypto?” Even the devs at Sourcegraph are skeptical. I mean, what engineer isn’t. Being skeptical is a survival skill. ... The punchline, and it’s honestly one of the hardest things to explain, so I’m going the faith-based route today, is that all the winners in the AI space will have data moats." - Steve Yegge

    Monopoly Bundling

    The thing that makes the AI algorithms particularly dangerous is not just that they are often wrong while appearing high-confidence, it is that they are tied to monopoly platforms which impact so many other layers of the economy. If Google pays Apple billions to be the default search provider on iPhone any error in the AI on a particular topic will hit a whole lot of people on Android & Apple devices until the problem becomes a media issue & gets fixed.

    The analogy here would be Coca-Cola shipping a poisoned batch while also bottling Pepsi's products: the error propagates across what look like competing brands.

    These cloud platforms also want to help retailers manage in-store inventory:

    Google Cloud said Friday its algorithm can recognize and analyze the availability of consumer packaged goods products on shelves from videos and images provided by the retailer’s own ceiling-mounted cameras, camera-equipped self-driving robots or store associates. The tool, which is now in preview, will become broadly available in the coming months, it said. ... Walmart Inc. notably ended its effort to use roving robots in store aisles to keep track of its inventory in 2020 because it found different, sometimes simpler solutions that proved just as useful, said people familiar with the situation.

    Microsoft has a browser extension for adding coupons to website checkouts. Google is also adding coupon features to their SERPs.

    Run a coupon site? A BIG heads-up as "clippable coupon" functionality looks to expand from shopping to the core SERP. See the "Coupons from stores" feature below... https://t.co/w1tcoST1uF— Glenn Gabe (@glenngabe) February 8, 2023

    Every ad network can use any OS, email, or web browser hooks to try to reset user defaults & suck users into that particular ecosystem.

    AI Boundaries

    Generative AI algorithms will always have a bias toward being backward looking, as they can only recreate content based off other ingested content that has gone through some editorial process. AI will also overemphasize the recent past, as more dated cultural references represent an unneeded risk and most forms of spam target things that are sought after today. Algorithmic publishing will lead to more content being created each day.

    From a risk perspective it makes sense for AI algorithms to promote consensus views while omitting or understating the fringe. Promoting fringe views represents risk. Promoting consensus does not.

    Each AI algorithm has limits and boundaries, with humans controlling where they are set. Injection attacks can help explore some of those boundaries, but they'll be patched until probed again.

    My new favorite thing - Bing's new ChatGPT bot argues with a user, gaslights them about the current year being 2022, says their phone might have a virus, and says "You have not been a good user"

    Why? Because the person asked where Avatar 2 is showing nearby pic.twitter.com/X32vopXxQG— Jon Uleis (@MovingToTheSun) February 13, 2023

    Boundaries will often be set by changing political winds:

    "The tech giant plans to release a series of short videos highlighting the techniques common to many misleading claims. The videos will appear as advertisements on platforms like Facebook, YouTube or TikTok in Germany. A similar campaign in India is also in the works. It’s an approach called prebunking, which involves teaching people how to spot false claims before they encounter them. The strategy is gaining support among researchers and tech companies. ... When catalyzed by algorithms, misleading claims can discourage people from getting vaccines, spread authoritarian propaganda, foment distrust in democratic institutions and spur violence."

    Stating facts about population subgroups will be limited in some ways to minimize perceived racism, sexism, or other fringe fake victim group benefits fund flows. Never trust Marxists who own multiple mansions.

    At the same time individual journalists can drop napalm on any person who shares too many politically incorrect facts.

    “The speed with which they can shuffle somebody into the Hitler of the month club.”

    Joe Rogan and @mtaibbi discuss how left wing media created a Elon Musk “bad now” narrative based on nothing. pic.twitter.com/IaHHTHCo1f— Mythinformed MKE (@MythinformedMKE) February 14, 2023

    Some things are quickly labeled or debunked. Other things are blown out of proportion to scare and manipulate people:

    Dr. Ioannidis et al. found that across 31 national seroprevalence studies in the pre-vaccine era, the median IFR was 0.0003% at 0-19 years, 0.003% at 20-29 years, 0.011% at 30-39 years, 0.035% at 40-49 years, 0.129% at 50-59 years, and 0.501% at 60-69 years. This comes out to 0.035% for those aged 0-59 and 0.095% for those aged 0-69.

    The covid response cycle sacrificed childhood development (and small businesses) to offer fake protections to unhealthy elderly people (and bountiful subsidies to large "essential" corporations).

    ‘Civilisation and barbarism are not different kinds of society. They are found – intertwined – whenever human beings come together.’ This is true whether the civilisation be Aztec or Covidian. A future historian may compare the superstition of the Aztec to those of the Covidian. The ridiculous masks, the ineffective lockdowns, the cult-like obedience to authority. It’s almost too perfect that Aztec nobility identified themselves by walking with a flower held under the nose.

    A lot of children had their childhoods destroyed by the idiotic lockdowns. And a lot of those children are now destroying the lives of other children:

    In the U.S., homicides committed by juveniles acting alone rose 30% in 2020 from a year earlier, while those committed by multiple juveniles increased 66%. The number of killings committed by children under 14 was the highest in two decades, according to the most recent federal data.

    Now we get to pile inflation and job insecurity on top of those headwinds to see more violence.

    The developmental damage (school closed, stressed out parents, hidden faces, less robust immune systems, limited social development) is hard to overstate:

    The problem with this is that the harm of performative art in this regard is not speculative, particularly in young children where language development is occurring and we know a huge percentage of said learning comes from facial expressions which of course a mask prevents from being seen. Every single person involved in this must face criminal sanction and prison for the deliberate harm inflicted upon young children without any evidence of benefit to anyone. When the harm is obvious and clear but the benefit dubious proceeding with a given action is both stupid and criminal.

    Some entities will claim their own statements are conspiracy theory, even when directly quoted:

    “If Russia invades . . . there will be no longer a Nord Stream 2. We will bring an end to it.” - President Joseph R. Biden

    In an age of deep fakes, confirmation bias driven fast social shares (filter bubble), legal threats, increased authenticity of impersonation technology, AI algorithms which sort & rewrite media, & secret censorship programs ... who do you trust? How are people informed when nation states offer free global internet access with a thumb on the scale of truth, even as aggregators block access to certain sources demanding payments?

    What is deemed Absolute Truth in one moment (WHO, March 2020: don't wear masks for COVID!) becomes falsity the next (WHO, April: Everyone wear masks!).

    In 2018, fact-checkers affirmed the truth that Lula was a "thief." In 2022, courts barred election material that asserted this. pic.twitter.com/XlIoTNtYhc— Glenn Greenwald (@ggreenwald) February 24, 2023

    Lab leaks sure sound a lot like an outbreak of chocolatey goodness in Hershey, PA!

    Why is this story so important? It shows:
    1) unelected government officials have huge power to pursue dangerous agendas.
    2) rather than holding them accountable, corporate media cover for them.
    3) tech censorship ends up promoting rather than suppressing “disinformation.”— David Sacks (@DavidSacks) February 26, 2023

    "The fact that protesters could be at once both the victims and perpetrators of misinformation simply shows how pernicious misinformation is in modern society." - Canadian Justice Paul Rouleau

    What is freedom?

    By 2016, however, the WEF types who’d grown used to skiing at Davos unmolested and cheering on from Manhattan penthouses those thrilling electoral face-offs between one Yale Bonesman and another suddenly had to deal with — political unrest? Occupy Wall Street was one thing. That could have been over with one blast of the hose. But Trump? Brexit? Catalan independence? These were the types of problems you read about in places like Albania or Myanmar. It couldn’t be countenanced in London or New York, not for a moment. Nobody wanted elections with real stakes, yet suddenly the vote was not only consequential again, but “often existentially so,” as American Enterprise Institute fellow Dalibor Rohac sighed. So a new P.R. campaign was born, selling a generation of upper-class kids on the idea of freedom as a stalking-horse for race hatred, ignorance, piles, and every other bad thing a person of means can imagine.


ahrefs.com

  • 12/01/2026 The 100 Most Searched People on Google in 2026
    These are the 100 most searched people, along with their monthly search volumes. In almost every industry, there are celebrities, professionals, or influencers that other people want to emulate. For example, an amateur tennis player might want to know which
  • 12/01/2026 100 Most Expensive Keywords for Google Ads in 2026
    These are the 100 most expensive keywords on Google Ads, along with their monthly search volume and cost per click (CPC). The main reason is due to Google’s ads mechanism: Companies have to outbid each other in order to secure
  • 06/01/2026 Top Trending Topics (January 2026)
    They’re the keywords with the highest average search volume increase from our database of 28.7 billion keywords. These topics are trending or trended in the United States: Here’s how to find keywords that are trending in your niche: Go to Keywords
  • 06/01/2026 100 Most Asked Questions on Google (January 2026)
    These are the 100 most asked questions on Google, along with their monthly search volumes. Here’s how to find the most asked questions in your niche: Go to Keywords Explorer Enter a relevant keyword Go to the Matching terms report Toggle
  • 06/01/2026 Top Google Searches (January 2026)
    Below are lists of the top 100 most popular searches and questions in the US and worldwide, pulled from our database of 28.7 billion keywords: We removed all NSFW queries from this list. What you see above are the top

moz.com

  • 12/01/2026 How to Pitch To Speak at MozCon New York 2026

    Thinking of pitching for MozCon New York 2026? Learn how to create a standout speaker pitch, discover hot topics organizers love, and boost your chances of taking the stage.

  • 09/01/2026 How to Resolve Duplicate Content — Whiteboard Friday
    Duplicate content is a pretty common issue, and it can often be a bit confusing. What is it? How is it determined? Why are certain pages on my site being flagged as duplicates? And most importantly, how do I resolve these issues? Find the answers to all these questions and more in this week’s Whiteboard Friday!
  • 18/12/2025 How to Diagnose and Fix Google Maps Ranking Drops

    Lost rankings or calls from your Google Business Profile? Learn a step-by-step process to confirm a drop, find the real cause, and recover your local rankings fast.

  • 17/12/2025 SEOs Should Not Dismiss GEO for Being Low-Traffic

    Is it GEO, AI SEO, or just SEO? Regardless of the label, optimizing for LLMs is critical in 2025. Learn how you can reframe your existing tactics, why your off-site content strategy is critical, and what questions you should be asking when considering whether to focus on GEO.

  • 16/12/2025 A Guide to Web Guide: Our Hybrid Search Future

    Is Google Web Guide the future of search? Dr. Pete analyzes Google's new hybrid search interface, breaking down the 10 types of "query fan-out" that drive results and explains why search marketers need to prepare for a more conversational search style.
