It’s been an exciting week with important announcements from the stage at the 2019 Google I/O event. Probably the most impactful announcement is that Google has now committed to regularly updating its Googlebot crawl service to begin using the most recent stable version of their headless Chromium rendering engine. This is a significant leap forward with more than 1,000 features now supported over the previous version.
One basic example is adding a value to an array, a very common operation using the push() method:
<script>
  let names = [ 'Amy', 'Bruce', 'Chris' ];
  names.push('David');
</script>
In the example above, an array of names is defined and assigned 3 values: Amy, Bruce, and Chris. Then David is added to the list using the push() method. The same result can be achieved with newer ES6 syntax:
<script>
  let names = [ 'Amy', 'Bruce', 'Chris' ];
  names = [...names, 'David'];
</script>
The version above uses the newer ES6 ‘spread’ syntax
[...names] to expand the current values of the names array into a new array literal, and then adds David with an assignment operation instead of the push() method. The newer syntax is not supported by Chrome 41, and therefore would not have worked prior to Googlebot’s update to Chrome 74. For developers, having to write or transpile ES6 down for backward compatibility is death by a thousand cuts.
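To illustrate what “transpiling ES6 down” means in practice, here is a sketch of the kind of ES5-compatible output a tool like Babel might produce for the spread assignment above (illustrative, not the exact emitted code):

```javascript
// ES5-compatible equivalent of `names = [...names, 'David']`,
// runnable on older engines such as Chrome 41.
var names = ['Amy', 'Bruce', 'Chris'];
// concat() returns a new array rather than mutating in place,
// which mirrors what the spread assignment does.
names = names.concat(['David']);
```

The behavior is identical, but every such workaround is one more thing to write, configure, or debug.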
The Svelte framework was recently overhauled in its version 3 release, which introduced more precisely triggered, assignment-based page reactivity. There’s a fun viral video about it going around. Writing or transpiling the ‘names’ array code down to the older push() syntax for Google requires an extra step in Svelte, because push() adds values to an array but is not a variable assignment, and an assignment is what triggers page reactivity in Svelte 3.
<script>
  let names = [ 'Amy', 'Bruce', 'Chris' ];
  names.push('David');
  names = names; // To trigger Svelte reactivity
</script>
It’s easy to see why being able to use ES6 now:
<script>
  names = [...names, 'David'];
</script>
…is more developer friendly for Svelte users than the workaround above.
If you’re still seeing older versions of Chrome from Googlebot in your server logs, note that Google will eventually update the user-agent string to reflect the version of Chrome it is actually running. Also keep in mind that Google is a fairly large and dispersed company, with divisions that have varying access to its network resources. A particular department might still have settings to modify before it can begin using the new Chrome engine, but it stands to reason that everything will be using it very soon, especially the critical Web crawling services.
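If you want to check which rendering engine fetched a page, one simple approach is to pull the Chrome major version out of the logged user-agent string. A minimal sketch, assuming a Googlebot-style user-agent (the sample string and the `chromeVersion` helper here are hypothetical, for illustration only):

```javascript
// Hypothetical sample of a Googlebot smartphone-style user-agent string.
const ua =
  'Mozilla/5.0 (Linux; Android 6.0.1; Nexus 5X Build/MMB29P) ' +
  'AppleWebKit/537.36 (KHTML, like Gecko; compatible; Googlebot/2.1; ' +
  '+http://www.google.com/bot.html) Chrome/74.0.3729.131 Mobile Safari/537.36';

// Extract the Chrome major version, or null if no Chrome token is present.
function chromeVersion(userAgent) {
  const match = userAgent.match(/Chrome\/(\d+)/);
  return match ? Number(match[1]) : null;
}

chromeVersion(ua); // e.g. 74 for the sample string above
```

Grepping your logs for the Chrome major version this way makes it easy to spot when a given crawler switches engines.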
The nice thing about being a Technical SEO is that we get to advise developers about practices that align with Googlebot and that they mostly ought to be following in the first place. The nice thing about being an SEO Developer is that there’s a never-ending river of exciting modern code to play with, especially now that Google has caught up with Chromium 74. The only drawback is that an evergreen Chromium Googlebot doesn’t help you with Bing, DuckDuckGo, or social media sharing crawlers.
The more things change, the more they stay the same. You should still advise clients about pre-rendering and SSR. This ensures that no matter what user-agent you’re dealing with, it will receive rendered content for search or sharing. The predicament we find ourselves in is that if the planned application has a huge volume of reactive parts, for example constantly updating sports scores or stock market prices, we need client-side reactivity as well, and SSR alone won’t cover it.
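The combination can be sketched in plain JavaScript: render the initial markup on the server so every crawler sees content, then let a client-side script take over the live updates. A minimal sketch, assuming a shared `renderScores` helper and a hypothetical `/api/scores` polling endpoint (both are illustrative names, not from any particular framework):

```javascript
// Shared rendering helper, usable on both server and client.
function renderScores(scores) {
  return scores.map(s => `<li>${s.team}: ${s.points}</li>`).join('');
}

// Server side: embed the initial state directly in the HTML response,
// so crawlers that don't execute JavaScript still see the content.
const initial = [
  { team: 'Home', points: 3 },
  { team: 'Away', points: 1 },
];
const html = `<ul id="scores">${renderScores(initial)}</ul>`;

// Client side: take over and keep the list fresh without a reload.
// (Browser-only; shown commented out since this sketch runs server-side.)
// setInterval(async () => {
//   const scores = await (await fetch('/api/scores')).json();
//   document.getElementById('scores').innerHTML = renderScores(scores);
// }, 5000);
```

The key design point is that the same rendering function produces both the server-rendered markup and the client-side updates, so the crawler-visible HTML and the live page never drift apart.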