We used to call the job Analyst Programmer. I am not sure what your code did, but I am pretty sure you needed it - but who could have understood that the gap existed? Who could explain to an AI that it needed to create this obscure code to solve that problem? And now comes the hard part - persuading your organisation to adopt it.
AI can code - but can it understand what is missing from the organisation and persuade it to change - to spend years at industry conferences?
Look at Starliner.
NASA just announced that Boeing stuffed up, not with an engineering mistake (still no one knows exactly what broke) but because the whole organisation is so screwed up and so political that NASA just doesn't believe Boeing can fix it.
AI cannot fix our turf wars. That's not an intelligence problem (humans know going to war is bad, but Putin still exists). It's the systems we live in, and work in.
Changing those is feasible - once they are coded, transparent and open to inspection in a democracy.
We need programmable introspective systems of organisation - democracies in other words.
The engineering was not the problem - the problem was that the organisation was more or less toxic and incapable of doing engineering. Writing code that won't get used because of politics is a job we and AI can both do.
I ran a whole company on top of FreeBSD back in the day (2005-ish). It was great, and I ran all my personal PCs the same way (hell, refusing to install Windows to try out this Bitcoin idea is even now a good idea).
But somehow Linux still took over my personal and professional life.
Going back seems nice, but there needs to be a compelling reason - Docker is fine, and the costs don't add up any more. I don't have a real logical argument beyond that.
In the early years after 2000, FreeBSD 4 had much better performance and reliability in networking and storage applications than the contemporaneous Linux and Windows XP/2000.
However, in 2003 Intel introduced CPUs with SMT and in 2005 AMD introduced multi-core CPUs.
These multi-threaded and/or multi-core CPUs quickly replaced the single-threaded CPUs, especially in servers, where the FreeBSD stronghold was.
FreeBSD 4 could not handle multiple threads. In the following years Linux and Windows were quickly developed to take advantage of multiple threads and cores, while FreeBSD required many years for this - a period during which it became much less used than before, because new users were choosing Linux and some of the old users were also switching to Linux for new computers that FreeBSD did not support.
Eventually FreeBSD became decent again from the PoV of performance, but it has never regained a top position, and it lacks native device drivers for many of the hardware devices supported by Linux, because it has far fewer developers able to do the necessary reverse-engineering work, or the porting work for the cases where a company provides Linux device drivers for its hardware.
For the last three decades I have been using both FreeBSD and Linux continuously. I use Linux on my desktop PCs and laptops, and on some computational servers where I need software support not available for FreeBSD, e.g. NVIDIA CUDA (NVIDIA provides FreeBSD device drivers for graphics applications, but not CUDA). I continue to use FreeBSD for many servers that implement various kinds of networking or storage functions, due to its exceptional reliability and simplicity of management.
FreeBSD threading was perhaps behind in general, but the big thing in Linux vs FreeBSD was always the 4.3BSD licensing lawsuits, which gave Linux a momentum that BSD never caught up with.
The real difference during the early 00s was that this momentum brought two things that made FreeBSD a worse choice (and made even more people end up using Linux):
1: "commercial" support for Linux, firstly hardware like you mentioned, but in the way that you could buy a server with some Linux variant installed and you knew that it'd run, unless you're an CTO you're probably not risking even trying out FreeBSD on a fresh machine if time isn't abundant.
Software like Java servers also comes to mind: it came with binaries or was otherwise easy to get running on Linux, and even with FreeBSD's Linux compatibility layer, VMs like the JVM and the CLR often relied on subtle details that made them incompatible with it (I tried running .NET a year or two ago and ran into random crashes).
2: a lot of "fresh" Linux developers had a heavy "works on my machine" mentality, relying on Linux semantics, paths or libraries in makefiles (or on dependencies like systemd) - see the sketch below.
Sure, there are often upstream patches (eventually) or patches in FreeBSD ports; the latter are good for stable systems but a PITA in the long run, since stuff doesn't get upstreamed properly and you're often stuck when there's a major release and need to figure out how to patch the new version yourself.
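To make the second point concrete, here is a hypothetical example (invented for illustration, not from any real project) of the kind of Linux-ism that bites: code that reaches into /proc directly instead of using a portable API. FreeBSD doesn't mount procfs by default, so this "works on my machine" on Linux and falls over there:

    import os

    def cpu_count_linux_only():
        # "Works on my machine": /proc/cpuinfo exists on Linux, but
        # FreeBSD does not mount procfs by default, so this raises
        # FileNotFoundError there.
        with open("/proc/cpuinfo") as f:
            return sum(1 for line in f if line.startswith("processor"))

    def cpu_count_portable():
        # The portable equivalent works on Linux, the BSDs, macOS, etc.
        return os.cpu_count()

    print(cpu_count_portable())

Multiply that by every hardcoded path, GNU-only makefile flag and systemd assumption in a dependency tree, and you get the patch burden described above.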
The copyright lawsuit was the first moment in time when FreeBSD and the other *BSDs were left behind, while Linux was free to advance.
Nevertheless, after this initial setback they recovered, and almost a decade later, around 2003, they had become the best solution for many server applications, even if they were not as widely known as Linux, which had spread a lot during the years when the *BSDs were tied up in lawsuits.
The slowness of their evolution towards multi-threading, caused by having far fewer developers than Linux and less corporate support, is what propelled Linux ahead of them again, and that handicap has never been recovered since.
The two points you list are of course correct, but they are linked to the continuous decline in FreeBSD users that started in 2003 and lasted several years: even if you were already an experienced FreeBSD user and preferred it over Linux, it was pointless to install FreeBSD on a new state-of-the-art computer, because FreeBSD could not harness its power.
Yeah, I have a similar situation; FreeBSD is a great operating system, but the sheer amount of investment in Linux makes all the warts semi-tolerable.
I'm sure some people have a sunk-cost feeling with Linux and will get defensive about this, but ironically this was exactly the argument I heard 20 years ago - and I was defensive about it myself then. It has only become more true, though.
It's really hard to argue against Linux when even architecturally poor decisions are papered over by sheer force of will and investment; so in a day-to-day context Linux is often the happy path even though the UX of FreeBSD is more consistent over time.
I know this comment is effectively a side tangent on a side tangent, but that was always the strangest thing to me as well. I remember in 2012, when I was debating fiddling around with Bitcoin, that was one of the things that turned me off: I was sure there was no way something as brilliant as this was supposed to be had been developed by a Windows user.
Which surely says something about all these ideological purity tests.
Windows developers (like sysadmins) are of two kinds in my experience.
People who don't understand shit about how the system behaves and are comfortable with that. "I install a package, I hit the button, it works"
.. and
People who understand very deeply how computers work, and genuinely enjoy features of the NT Kernel, like IOCP and the performance counters they offer to userland.
What's weird to me is that the competence is bimodal; you're either in the first camp or the second. With Linux (plus BSD, Solaris, etc.) it's a lot more of a spectrum.
I've never understood exactly why this is, but it's consistent. There's no "middle-good" Windows developer.
The "install package, press button, it works" approach is great when you just want a boring OS because your interest is elsewhere, rather than an itch to make the machine a perfect extension of oneself.
The machine and installation are just fungible.
I think I've had Linux as a primary OS twice, FreeBSD once, and OS X once; what's pulled me back each time has been software and fiddling.
I'm on the verge of giving Linux or OS X another shot, though: some friends have claimed that the fiddling is virtually gone on Linux these days, and Wine also seems more than capable now of handling the software that brought me back.
But also, much of the software is available outside of Windows today.
Unix is easier to understand than the NT mess, and everything is in the open and documented, so you can reach a good middle level of knowledge. OTOH, in order to understand NT deeply you must be a reverse engineer. That said, crazy experts do exist on the other side - in Wine (both ways, Unix and NT), OpenBSD and 9front - on par with these NT wizards. It just happens that with Unix/9front you climb an almost flat slope (even flatter with the latter) due to the crazy simple design, while with NT the knowledge is damn expensive to earn.
With 9front you OFC end up with expertise on par with NT, but with far less effort. The books (9intro), the papers, CSP for concurrency... it's all there; there's no magic. You don't need OllyDbg or an NT object explorer the way you do to understand OLE and COM, for instance.
Reverse engineer 9front? Maybe for some issues while debugging, because the rest is at /sys/src, and if something happens you just point Acid under Acme to go straight to the offending source line. The man pages cover everything. Drivers are 200x smaller and more understandable than on both NT and Unix.
Meanwhile, to do that under NT you must almost be able to design an ISA by yourself, plus some trivial compiler/interpreter/OS for it, because there's no open code for anything. And no, Wine is not a reference but a reimplementation.
That's kinda true for the older/integrated parts of Windows: lots and lots of functionality that people have come to rely on over the years, but also huge black boxes that you must not be intimidated to probe into to solve weird issues (they often become understandable if you have enough experience as a developer to interpret what the API surface says about the likely internal implementation).
Errr… Galileo was asked to write a book discussing both sides of the heliocentric/geocentric debate… and so wrote a book with two characters having a debate while walking in a garden - one named (I paraphrase for effect) “Galileo” and one named “Pope Simplehead”.
Needless to say the next twenty years under house arrest gave him a lot of time to think about character names :-)
To me this is just one more pillar underlying my assumption that self-driving cars that can be left alone on the same roads as humans are a pipe dream.
Waymo might have taxis that work on nice daytime streets (but with remote “drone operators”). But dollars to doughnuts someone will try something like this on a Waymo taxi the minute it hits the Reddit front page.
The business model of self-driving cars does not include building separated roadways and junctions. I suspect long-distance passengers and light loads are viable (most highways can be expanded to have one or more robo-lanes), but cities are most likely to have drone operators keeping things going and autonomous systems for handling loss of connection etc. The business models are there - they just don't look like KITT, sadly.
How does Waymo fix it? They have to be responsive to some signs (official, legitimate ones such as "Lane closed ahead, merge right") so there will always be some injection pathway.
They've mapped the roads and they don't need to drive into a ditch just because there's a new sign. It probably wouldn't be all that hard to come up with criteria for saying "this new sign is suspicious" and flag it for human review. Also, Waymo cars drive pretty conservatively, and can decide to be even more cautious when something's confusing.
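A back-of-the-envelope sketch of what such criteria might look like (everything here - names, thresholds, the map format - is invented for illustration, not how Waymo actually works): compare each newly detected sign against the mapped prior for that spot, and escalate to a human when they disagree.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Sign:
        kind: str              # e.g. "detour", "lane_closed"
        cell: tuple[int, int]  # coarse map-grid cell where it was seen

    # Hypothetical prior: signs already known from earlier survey passes.
    KNOWN_SIGNS = {Sign("lane_closed", (12, 40))}

    REDIRECTING = {"detour", "do_not_enter", "road_closed"}

    def is_suspicious(sign: Sign, confidence: float) -> bool:
        # A sign is suspicious if it is new relative to the map AND it
        # either redirects traffic or was detected with low confidence.
        novel = sign not in KNOWN_SIGNS
        return novel and (sign.kind in REDIRECTING or confidence < 0.8)

    def react(sign: Sign, confidence: float) -> str:
        if is_suspicious(sign, confidence):
            # Don't obey blindly: slow down and ask a human monitor.
            return "slow_down_and_flag_for_human_review"
        return "obey"

    print(react(Sign("detour", (12, 41)), 0.95))  # -> flagged for review

The point is just that the map gives you a prior, so a never-before-seen sign that redirects traffic doesn't have to be obeyed on sight.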
Someone could probably do a DOS attack on the human monitors, though, sort of like what happened with that power outage in San Francisco.
Given Waymos don't actually connect LLMs to the wheels, they are pretty safe.
Even if you fool the sign-recognizing LLM with prompt injection, it'll be the equivalent of a wrong road sign. And a Waymo is not going to drive into a wall even if someone places a "detour" sign pointing there.
To me the issue is not that security agencies use Pegasus, but that foreign security agents physically assault a British citizen in London and MI6 does bugger all.
I’m not sure what I want from our security services, but security sounds good.
Also, I wonder if there is a background level of foreign agent activity they accept, and how that relates to the police's paradox of using confidential informants.
> if there is a background level of foreign agent activity they accept
Yup, there is constantly a number of known spies in pretty much all countries. If they were all ejected, the other country would do the same to your spies, and monitoring compliance would become harder - so it seems to be in everyone's interest not to be too strict. See for example https://johnsontr.github.io/assets/files/spies_current.pdf
Also there's the idea of "the optimal amount of fraud is non-zero" which generalises to lots of things - including this one.
Someone being beaten up on the streets is a domestic policing issue.
That the perpetrators may turn out to be foreign agents is neither here nor there; only if they were diplomatic staff would it not be a domestic policing issue. However, the UK police have largely withdrawn from certain areas, and this would simply be another symptom.
The High Court action suggests there was a civil case pursuing the perpetrators (or their principals), rather than a criminal case. With a properly functioning police system, that should not be necessary.
Khan is the PCC for London; he sets their priorities.
Doesn’t matter if it’s people being poisoned with polonium or getting beaten up, preventing the activities of foreign intelligence services is generally not the job of the PCC.
It is the job of the British intelligence services to blow someone up in Riyadh to deter these activities.
London just had its lowest annual murder count in 11 years.
>Homicide rate now 1.1 per 100,000 people, lower than any other UK city and major global cities including New York (2.8), Berlin (3.2) and Toronto (1.6)
It’s almost as if the world is wide and we are siloed.
For example, “High School Musical” made a billion dollars without me even knowing such a thing existed.
Edit: This is the first I've heard of this as well, but it bothers me. Along with the Salisbury poisonings, I would be interested in what criminal activities foreign agents are suspected of in the UK (Russia presumably heading the list).
The very first computers (the Manchester Baby) used CRTs as memory - the ones and zeros were bright spots on a “mesh”, and the electric charge on the mesh was read and resent back to the CRT to keep the RAM fresh (a sort of self-refreshing RAM).
Yes, but those were not the standard kind of CRTs that are used in TV sets and monitors.
The CRTs with memory for early computers were actually derived from the special CRTs used in video cameras, where the image formed by the projected light was converted into a distribution of charge stored on an electrode, which was then sensed by scanning with an electron beam.
Using CRTs as memory was proposed by von Neumann, and in his proposal he used the appropriate name for that kind of CRT: "iconoscope".
DRAM-like memories made with special storage CRTs were used for a few years, until 1954. For instance, the first generation of commercial electronic computers made by IBM (the scientific IBM 701 and the business-oriented IBM 702) used such CRTs.
Then CRT memories became obsolete almost instantaneously, due to the development of magnetic-core memories, which did not require periodic refreshing and which were significantly faster. The fact that core was also non-volatile was convenient at that early time, though not essential.
Today, due to security concerns, you would actually not want your main memory to be non-volatile, unless you also always encrypt it completely, which creates problems of secret-key management.
So CRT memories became obsolete several years before vacuum tubes in computers were replaced with transistors, which happened around 1959/1960.
Besides CRT memories and delay-line memories, another kind of early computer memory that quickly became obsolete was magnetic drum memory.
In the cheapest early computers (like the IBM 650), the main memory was not RAM (i.e. neither CRT nor magnetic core) but a magnetic drum (i.e. sequential, periodic access to data).
There is a tendril vibrating on the spider's web in society - this and the equally horrific case in France suggest that rape and sexual abuse run far deeper than perhaps most of us ever assumed.
And that’s probably the OpenAI killer. If any of my work product from now to 2030 could legitimately be entangled in any of the millions of coming copyright claims, I am in a world of hurt.
This fast run to use LLMs in everything can be undone by one court decision - and the sensible thing is to isolate as much as you can.
Also, I don't think it will be easy to defend a copyright on AI-generated images, especially if your IP is 'a lot of humanoid soldiers in power armor' and not specific characters.
> If any of my work product from now to 2030 could legitimately be entangled in any of the millions of coming copyright claims, I am in a world of hurt.
right... there has been ample code and visual art around to copy for decades, and people have, and they get away with it, and nothing bad happens, and where are the "millions of coming copyright claims" now?
i don't think what you are talking about has anything to do with killing openai, there's no one court decision that has to do with any of this stuff.
> there has been ample code and visual art around to copy for decades, and people have, and they get away with it, and nothing bad happens
Some genres of music make heavy use of 'samples' - tiny snippets of other recordings, often sub-5-seconds. Always a tiny fraction of the original piece, always chopped up, distorted and rearranged.
And yet sampling isn't fair use - the artists have to license every single sample individually. People who release successful records with unlicensed samples can get sued, and end up having to pay out for the samples that contributed to their successful record.
On the other hand, if an artist likes a drum break but instead of sampling it they pay another drummer to re-create it as closely as possible - that's 100% legal, no more copyright issue.
Hypothetically, one could imagine a world where the same logic applies to generative AI - that art generated by an AI trained on Studio Ghibli art is a derivative work the same way a song with unlicensed drum samples is.
I think it's extremely unlikely the US will go in that direction, simply because the likes of Nvidia have so much money. But I can see why a cautious organisation might want to wait and see.
Indemnification only means something if the indemnifying party exists and is solvent. If copyright claims on training data got traction, it would be neither, so it doesn't matter if they provide this or not. They probably won't exist as a solvent entity in a couple years anyway, so even the question of whether the indemnification means anything will go away.