Provide a source for that? As someone who actually lives in New Zealand, that's the exact opposite of what they have said about the outbreak's genetics... iirc it looks like it might have come from the UK or Australia.
I can attest it tailors content to at _least_ location. I installed TikTok for the first time yesterday and saw content local to my country (New Zealand) pretty soon, before I signed up.
I love how big the numbers are in recent versions of Firefox. On my crappy work computer that can barely run Notepad++ I am pulling 1.8 billion ops for getElementById and getElementsByClassName. The performance gaps:
* getElementById vs equivalent querySelector is about 1000x.
* getElementsByClassName vs querySelectorAll is about 250,000x.
* On Chrome getElementsByClassName vs querySelectorAll is only about 426x.
This is a micro-benchmark. In the real world that disparity would be magnified further by the number of nodes in a given page and by the frequency and depth of DOM access.
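For anyone wanting to reproduce numbers like these, a minimal ops/sec harness is all such micro-benchmarks really are. This is a sketch, not any benchmarking site's actual code; `opsPerSec` is a made-up helper name:

```javascript
// Minimal ops/sec harness in the spirit of jsperf-style benchmarks.
// opsPerSec is a hypothetical helper, not a real browser API.
function opsPerSec(fn, durationMs = 200) {
  const end = Date.now() + durationMs;
  let ops = 0;
  while (Date.now() < end) {
    fn();
    ops++;
  }
  return Math.round(ops / (durationMs / 1000));
}

// In a browser console you might then compare, e.g.:
// opsPerSec(() => document.getElementById('foo'));
// opsPerSec(() => document.querySelector('#foo'));
```

Note that tight loops like this measure best-case cached behavior, which is part of why the gaps look so dramatic.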
I decided to check back in on this, and this seems like it was entirely a wording problem in your first post. Your post implied that _parsing the query selector string_ was slowing down Javascript websites, which obviously made everyone jump in with hot takes to say that can't be the case. What you seem to _actually_ be saying is that using query selectors instead of finding elements directly by class/id is slow, which yes, that's certainly the case.
Query selectors are slow, yes, but it's because they have to parse a string. Other DOM methods don't have a string-parsing step. You cannot optimize access to the DOM if a string-parsing step must occur first.
That barrier to efficiency is greatly magnified by the complexity of the query string, the size of the dynamically rendered page, and the number of query strings. If not for that string parsing step why would query selectors be any different from any other DOM access instruction computationally?
Quite frankly, you're leaping to conclusions, as well as starting from the (rather flawed) premise that parsing a, what, ten-character string is so slow that it can drag an operation down by three orders of magnitude.
There are any number of reasons why the querySelector API would be computationally different from the getElementX ones, especially considering that they don't even return the same thing.
> They return exactly the same thing: either null or a node or node list depending upon the method in question
...this is another thing that I'm surprised to find that people don't know. Do devs really just use these methods without ever looking at what they return or how they behave?
getElementsByClassName returns a (live) HTMLCollection, not a NodeList. querySelectorAll returns a (static) NodeList. That in itself is an obvious computational difference/potential bottleneck, because the simplest way to implement a live collection is to cache and return the same object on subsequent calls for it. And that's precisely what browsers do (getElementsByClassName('foo') === getElementsByClassName('foo')). In other words, getElementsByClassName called multiple times with the same class doesn't actually do any extra work. The real work for the browser engine, which doesn't actually happen at the point of the function call, is watching any tracked class names and updating their associated HTMLCollections when an element matching that class name is added to or removed from the DOM.
On the other hand, gathering the static NodeList for a querySelectorAll call requires actually iterating over the DOM to find element(s) matching the selector every time (in the naive implementation), with the trade-off of the engine not having to watch the collection internally.
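The live-vs-static distinction can be sketched without a DOM at all. This is a toy model, not how any engine is actually written; `getByClass`/`queryAll` and the backing array are all invented for illustration:

```javascript
// Toy model: live-and-cached collection vs static snapshot.
const elements = [{ cls: 'foo' }, { cls: 'bar' }];
const liveCache = new Map();

// "Live" collection: the same cached object is returned on every
// call, and it reads the backing store lazily on access, so later
// mutations are visible through it.
function getByClass(cls) {
  if (!liveCache.has(cls)) {
    liveCache.set(cls, {
      get items() { return elements.filter(e => e.cls === cls); },
    });
  }
  return liveCache.get(cls); // no extra work on repeat calls
}

// "Static" snapshot: walks the store and copies matches every call.
function queryAll(cls) {
  return elements.filter(e => e.cls === cls);
}

const live = getByClass('foo');
const snap = queryAll('foo');
elements.push({ cls: 'foo' });          // simulate a DOM insertion
console.log(getByClass('foo') === live); // same cached object
console.log(live.items.length);          // live view sees the insertion
console.log(snap.length);                // snapshot frozen at call time
```

Real engines invalidate and recompute these collections far more cleverly than a filter on every access, but the observable contract is the same.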
As an aside, the querySelector method with an ID selector is slower than the getElementById method, but on the same order of magnitude (a difference of a few hundred thousand ops/sec for me). So if one looks at all the data in the benchmark and not just the class ones in isolation, it becomes clear that merely parsing the selector string is not enough to drop millions/billions of operations per second down to single-digit thousands.
> e.g. having the “NavigationService” change which links should be available by listening to a state change on the “UserSession” service as a user becomes authenticated.
Could you expand on how you're achieving this? I'm starting a new project using xState at work and this is one of the things I've been trying to come up with a solution to.
My biggest problem is that the pattern I've set up for routing on top of xState (it's an Electron app, so no URL management is required, which makes that easy) requires you to create the state machine yourself, so it can't, for example, be used as an actor within a larger system.
It sounds like your NavigationService and UserSession service might be similarly separated so I'm curious how they're communicating?
Something that occurred to me last night is that I can probably make a coordinator of some sort somewhere in the tree which forwards actions from state machine to another as required.
We keep things very simple. Maybe that will have to change as the code base grows, but for now it works and it is easy to grok.
On start of the application we create singletons of our state machine backed services, and manage the dependencies between them manually. That is, we create an instance of the UserSessionService and then pass it into a "constructor" for the NavService. That constructing function creates the NavService and then wires them together by subscribing the NavService to listen to the state transitions of the UserSessionService.
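A dependency-free sketch of that wiring pattern follows. The service names come from the thread above, but the internals here are invented stand-ins; real code would use xState's interpreter and its subscribe mechanism rather than this hand-rolled listener list:

```javascript
// Sketch of wiring two singleton services together at startup.
// The subscribe mechanism stands in for xState's interpreter;
// the service internals are invented for illustration.
function createUserSessionService() {
  const listeners = [];
  let state = 'unauthenticated';
  return {
    subscribe(fn) { listeners.push(fn); },
    login() {
      state = 'authenticated';
      listeners.forEach(fn => fn(state)); // notify on transition
    },
    getState() { return state; },
  };
}

// "Constructor" that builds the NavService and subscribes it to the
// UserSessionService's state transitions, as described above.
function createNavService(userSession) {
  const nav = { links: ['home', 'login'] };
  userSession.subscribe(state => {
    if (state === 'authenticated') {
      nav.links = ['home', 'dashboard', 'logout'];
    }
  });
  return nav;
}

// On application start: create singletons, wire dependencies manually.
const userSession = createUserSessionService();
const navService = createNavService(userSession);
userSession.login();
console.log(navService.links); // ['home', 'dashboard', 'logout']
```

The dependency direction matters: NavService knows about UserSessionService, never the reverse, which keeps the graph easy to reason about as services are added.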
I suppose if the code base grows and you have dozens of services to keep track of, it will get pretty unwieldy, but that is no different from using a fancy IoC container in Java/.Net. We plan on keeping things simple, while still breaking apart the single store you'd have with Redux.
I hacked together a simple gist to show the basics of how we wire things together (without getting into the specifics of how the internals of the state machines work). [1]
That does help clarify things, thanks :) For context I'm just starting out this app so it's simple for now, but this is my first major project with xState so I'm trying to come up with sensible patterns.
I'll have a think about this and see if I can work it into my structure as I'm certainly going to need inter-machine communication as it grows.
It also shows my prototype for xState-based declarative data fetching, though I have since switched to using observables instead of promises for representing my GraphQL query results, so I've changed it quite a bit.
That advertising campaign wasn't Romero's idea; it was the idea of someone else at Ion Storm. Romero was initially a bit hesitant on the slogan but caved to pressure.
At least that was how it was presented in (iirc) Masters of Doom, who knows what the actual truth is.
Ah, good to know. It's been a couple of years since I read the book and I may also have misremembered something about the ad. I definitely had the impression that the guy would have been a very difficult coworker, boss or roommate, and he was all three for parts of the team at various times!
I have an unlimited gigabit connection that I pay $120 NZD per month for. And for content on good CDNs or hosted in the country I do get near the advertised speed. It took me just under 4 minutes to download the Star Wars Battlefront 2 beta from Origin, which had a size of 23.78 GB, averaging 90+ MB/s.
I probably use around 5TB of data per month as well.
In Poland I have 300/30 Mbit (they connected fiber to my house recently) for 50 PLN (about 19 NZD, or 13 USD) per month. They offer 600/60 in some cities (where there is more competition), but I don't know the price; I think it is more in the 80 PLN range. They also plan to launch a 1 Gbit plan (probably with a 100 Mbit uplink).
Not the guy you asked, but here the speeds are set by the fibre infrastructure holders, not the ISP. So on residential connections the fastest you'll get is 900/500. I pay $129 NZD (~335 PLN). Not symmetrical, but damn impressive nonetheless.
Damn, that's not bad at all! For comparison, I pay 40 USD for a 20 Mbit connection with Xfinity here in the US. Oh, and I only get 1 TB of data per month.
They are not porting, they have built an emulator that runs the 360 OS and then they boot the games inside of that.
>Delving deeper, Spencer explained exactly how the emulator packages the Xbox 360 games, and how it compares to Xbox 360's emulation of original Xbox games.
>"You download a kind of manifest of wrapper for the 360 game, so we can say 'hey, this is actually Banjo, or this is Mass Effect. The emulator runs exactly the same for all the games.
>"I was around when we did the original Xbox [backwards compatibility] for Xbox 360 where we had a shim for every game and it just didn't scale very well. This is actually the same emulator running for all of the games. Different games do different things, as we're rolling them out we'll say 'oh maybe we have to tweak the emulator.' But in the end, the emulator is emulating the 360, so it's for everybody."
>Asked about whether Microsoft would require permission from game publishers to adjust game code, Spencer clarified it would not be interfering with code.
>"The bits are not touched," he said. "There's some caveats, and as always I like to be as transparent as I can be on this: Kinect games won't work from the 360, because translating between the Kinect sensors is almost impossible."
I also remember watching a video where they talked about it, it had some more details. I can't remember what it's called though and I couldn't find it with a cursory search.