Yeah the solar array on Starlink is held perpendicular to the velocity vector, so the cross section relative to the colliding body will invariably be smaller than the worst case.
It's interesting to try to create a metric of collision avoidance "stress" and resiliency to outages. I don't think this is a particularly useful one (and the title is alarmist/flamebait), but it is a first cut at something new. A more nuanced aggregate strategy for different orbital altitudes would make sense. Maybe someone can suggest (or has already suggested) a comprehensive way to keep the risk of cascading debris events low (and measured) that is useful for launch planning.
Complete loss of control of the entire Starlink constellation (or any megaconstellation) for days at a time would be an intense event. Any environmental cause (a solar event) would be catastrophic ground-side as well. Starlink satellites will decay and re-enter pretty quickly if they lose attitude control, so it's a bit of a race between collisions and drag. Starlink solar arrays are quite large drag surfaces, and the orbital decay probably makes collisions less likely. I would not be surprised if the satellites are designed to deorbit without ground contact after some period of time. I'm sure SpaceX has done some interesting math on this, and I'd love to see it.
Collision avoidance warnings are public (with an account): https://www.space-track.org/ But importantly they are intended to be actionable, conservative warnings a few days to a week out. They overstate the probability based on assumptions like this paper's (estimates of cross-sectional area, uncertainty in orbital knowledge from ground radar, and ignorance of attitude control or future maneuvers). Operators like SpaceX will take these and use their own high-fidelity knowledge (from onboard GPS) to get a less conservative, more realistic probability assessment. These probabilities invariably decrease over time as the uncertainty gets lower. Starlink satellites are constantly under thrust to stay in a low orbit with a big draggy solar array, so a "collision avoidance maneuver" to them is really just a slight change to the thrust profile.
Interesting stuff in the paper, but I'm annoyed at the title. I hate when people fear-bait about Kessler syndrome against some of the more responsible actors.
By now, I'm convinced that Kessler syndrome exists solely to be fear bait. Almost no one knows what it is or what it does - people just know the stupid "space is ruined forever" media picture.
If you're interested in building something, Planet released an open source hardware/software satellite radio that works over amateur radio bands for ~$50: https://github.com/OpenLST/openlst
I can't believe how uninformed and angry people were over this, and how willing they were to argue about it anyway. The whole point was a very reasonable compromise between a legal requirement to scan photos and keeping photos end-to-end encrypted for the user. You can say the scanning requirement is wrong; there are plenty of arguments for that. But Apple went so above and beyond to try to keep photo content private and provide E2E encryption while still trying to follow the spirit of the law. No other big tech company even bothers, and somehow Apple is the outrage target.
There is absolutely no such legal requirement. If there were one it would constitute an unlawful search.
The reason the provider scanning is lawful at all is because the provider has inspected material voluntarily handed over to them, and through their own lawful access to the customer material has independently and without the direction of the government discovered what they believe to be unlawful material.
The cryptographic functionality in Apple's system was not there to protect the user's privacy; the cryptographic function instead protected Apple and their data sources from accountability by concealing the fingerprints that would cause users' private data to be exposed.
A law by the government requiring proactive scanning of photos would in fact make the whole situation worse in the US because there would need to be a warrant if the government is requiring the scan. As long as it's voluntary by the company and not coerced by the government, they can proactively scan.
If your core concern is privacy, surely you'd be fine with "no bytes ever leave my device". But that's a big-hammer way to ensure no one sees your private data. What about external (iCloud/general cloud) storage? That's pretty useful, and if all your data is encrypted in such a way that only you can read it, would you consider that private? If done properly, I would say that meets the goal.
What if, in addition to storage, I'd like to use some form of cloud compute on my data? If my device preprocesses/anonymizes my data, and the server involved uses homomorphic encryption so that it also can't read my data, is that not also good enough? It's frustrating to see how much above and beyond Apple has taken this simple service to actually preserve user privacy.
I get that enabling things by default triggers some old wounds. But I can understand the argument that it's okay to enable off-device use of personal data IF it's completely anonymous and privacy preserving. That actually seems very reasonable. None of the other mega-tech companies come close to this standard.
iCloud is opt in. This should be too. A lot of people are fine with keeping their photos offline-only and syncing with their computers through a cable.
Making it “private” with clever encryption is their job since Apple wants to sell privacy. They aren’t doing it because they are nice or care about us. Plus, code is written by people and people write bugs. How can you tell this is truly bug-free and doesn’t leak anything?
Ultimately, making it opt-in would be painless and could be enabled with a simple banner explaining the feature after the update or on first boot, like all their opt-in features. Making it opt-out is irresponsible to their branding at best and sketchy to their users at worst, no matter how clever they say it is.
No — users should be the ones to decide if “encrypted on remote storage” is a beneficial trade off for them and their particular situation.
I think there’s some weird impulse to control others behind these decisions — and I oppose that relationship paradigm on its own grounds, independent from privacy: a company has no business making those choices for me.
You are free to use such services if you wish; others are free not to use those services.
It's a web of danger for sure. Configuring CI in-repo is popular (especially in the Gitlab world) and it's admittedly a low-friction way to at least get people to use config control for CI (or use CI for builds at all). I think the number of degrees of freedom is really a footgun.
I remember early Gitlab runner use when I had a (seemingly) standard build for a docker image. There wasn't any obvious standard way to do that. There were recommendations for Docker-in-Docker (dind), just giving the runner shell access, etc. There's so much customization that it's hard to decide what's safe for a protected/main branch vs. user branches.
I don't have a solution. But I think it would be better if, by default, CI engines were a lot less configurable and forced users to adjust their repo and build to match some standard configurations, like:
- Run `make` in a Debian docker image and extract this binary file/.deb after installing some apt packages
- Run docker build . and push the image somewhere
- Run go build in a standard golang container
And really made you dance a little more to do things like "just run this bash script in the repo". Restrict those kinds of builds to protected branches/special setups.
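As a sketch of what such presets could look like, here's a hypothetical restricted config. The syntax is invented for illustration (it's not any real CI system's schema); the point is that the repo only picks a vetted recipe and a few parameters, with no arbitrary scripts allowed outside protected branches:

```yaml
# Hypothetical preset-based CI config (illustrative syntax only).
# The runner owns the recipe; the repo only parameterizes it.
preset: debian-make
options:
  image: debian:bookworm
  apt_packages: [build-essential, libssl-dev]
  artifacts: [build/myapp.deb]
```

Anything beyond the preset's knobs (custom shell, privileged containers) would require explicit out-of-band approval.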
Having the CI config in the same source control tree is dangerous and hard to secure. It would probably be better to have some kind of headless branch, like GitHub Pages uses, that is just for CI config.
I worked for about a year with a consulting firm that handled "Y2K compliance". Unlike this Andersen exercise in legal face-saving, it was a real job. Big companies hired us to do a full inventory of their site equipment (including manufacturing plants and pharma equipment) and go line by line with their vendors to figure out which components had known Y2K issues, which had not been tested at all, and which ones were fine or had simple fixes. We helped them replace and fix what needed to be fixed.
Y2K was a real problem. The end-of-the-world blackouts + planes falling from the sky was sensationalism, but there were real issues and most of them got fixed. Not trying to take away from this very interesting story of corrupt cronyism, but there were serious people dealing with serious problems out there. "Remember Y2K? Nothing happened!" is a super toxic lesson to take away from a rare success where people came together and fixed something instead of firefighting disasters.
...and 24 years later, after the paperwork has been filed away, someone will still write that the problem never existed. Y2K minimization and anti-vaxx sentiment are two symptoms of problems solved so successfully that the magnitude of the problem disappears from the collective consciousness.
Place I used to work had a cycle of "Everything is working. We don't need quite this much IT staff." and "Everything is broken. Clearly we need more IT staff."
How the people in charge of this stuff never noticed the cycle is beyond me.
I've seen this phenomenon play out multiple times in my professional career, where a considerable amount of effort goes into creating a robust system only for the effort to be minimized by management due to its stability. Somehow preventing a plane from crashing is not as valuable as digging through the ashes.
In agreement with your overall point, accounting and legal are different, though.
Accounting is not a simple cost center; it's on the front line, showing the numbers. They can say how much it will cost not to comply with a rule, or how much they saved through creativity or ingenuity. Being that close to the money is a tremendous advantage.
Legal is more distant, but there's a clear scale of how much is on the line. When you review a contract, it's pretty clear what’s at stake if legal work is botched.
That's my main takeaway: if you care about money, you need to be as close to it as possible. At the same skill level, dealing with user security or financial transaction security won't pay the same.
That's why presentation and "sales" type skills can be useful. A bit of doom-mongering internal PR about the problem, then present the solution. Don't just solve it quietly.
> "Remember Y2K? Nothing happened!" is a super toxic lesson to take away […]
See perhaps:
> Normalcy bias, or normality bias, is a cognitive bias which leads people to disbelieve or minimize threat warnings.[1] Consequently, individuals underestimate the likelihood of a disaster, when it might affect them, and its potential adverse effects.[2] The normalcy bias causes many people to prepare inadequately for natural disasters, market crashes, and calamities caused by human error. About 80% of people reportedly display normalcy bias during a disaster.[3]
> Optimism bias or optimistic bias is a cognitive bias that causes someone to believe that they themselves are less likely to experience a negative event. It is also known as unrealistic optimism or comparative optimism.
I got my first IT job doing Y2K compliance. About 20% of our systems broke when the date was changed to 01/01/2000, including the PABX (a continual reboot cycle) and the sales-leads system, which would crash every few minutes.
> "Remember Y2K? Nothing happened!" is a super toxic lesson to take away from a rare success where people came together and fixed something instead of firefighting disasters.
My cynicism about Y2K comes from the fact that there were a lot of snarky articles written about how certain countries or companies were not Y2K ready but nothing bad seemed to happen to those countries either. It seems like a natural experiment was conducted and the results indicate there was no correlation between good outcomes and the work done to be Y2K ready.
I have no doubt that the armies of consultants did fix real issues but anyone working in software knows there is a never ending set of things to fix. The real issue is whether that work was necessary for the ongoing functioning of business or society.
"but nothing bad seemed to happen to those countries either."
Bad things still happened everywhere, despite all our efforts. How bad depends on your perspective.
Several people suffered a bizarre form of resurrection, which normally Christians would be all over and jolly excited about. Pensions suddenly started paying out; tax bills became due from people long dead. If you were not a relative of one of those people it did not affect you, and if you read about it, you'd perhaps have said "typical" and got on with life.
Some devices just went a bit weird and needed turning off and on again. Who cares or even noticed? Someone did but again, you did not hear about those.
I spent quite a while patching NetWare boxes and applying some very shaky patches to Windows workstations. To be honest, back then, timezone changes were more frightening than Y2K - they happen twice a year and something would always crash or go wrong.
The sheer amount of stuff that was fixed was vast, and I don't think your "countries that did and did not" thought experiment is valid, especially as it is conducted without personal experience or much beyond a bit of "ecce, fiat" blather.
Nowadays time is so easy to deal with. Oh, do you remember when MS fucked up certificates on Feb 29 a few years back?
Your examples make my point. Some bad things happened but not on a catastrophic level that warranted the level of investment that was put into Y2K projects.
Most of the companies I was familiar with then did not have enough time or resources to check for and resolve every problem, and these problems were very real. At some companies the engineers were given autonomy, authority, and effectively unlimited budget to do literally whatever was required to mitigate any publicly visible failures that occurred. We had a lot of backup plans to keep operations running, sometimes literally paper and pencil, when the inevitable failures occurred. A lot of companies were furiously faking it and throwing people at the problem.
I directly witnessed a few near catastrophic failures due to Y2K at different companies, literally company killers. We kept everything (barely) running long enough to shore up and address the failures without anyone noticing, partly because we had prepared to operate in the face of those failures since we knew there was no way to fix them beforehand. It was a tremendous decentralized propaganda coup. No one wanted to be the company that failed as a result, the potential liability alone was massive.
The idea that what was averted was minor is a pretty naive take. I was genuinely surprised that we actually managed to hold some of the disasters together long enough — out of sight and out of mind — to fix them without anyone noticing critical systems were offline for months. IT was a bit more glitchy, slow, and unavailable back then, so the excuses were more plausible.
When things got missed, things went _badly_ wrong, and that spurred businesses to take rapid action to respond.
The first "Y2K" bugs appeared when banks' computer systems started messing up the calculations of long-dated financial securities and mortgages, decades before the millennium. Closer to the time, supermarkets started junking food that had a post-1999 best-before date. Those were company-ending problems if not fixed, and so they got overwhelming and rapid focus.
"... a lot of snarky articles written about how certain countries or companies were not Y2K ready..." I know you're talking about articles written after Jan 1, 2000. But there were a lot of articles written before then that were Jeremiah doomsday articles, so the snarky articles were reacting in part to equally wrong articles before then.
One article I recall in particular was in Scientific American some time in (IIRC) 1998 or early 1999. It prophesied (I use that word intentionally) that no matter how much money and effort was put into fixing the problem ahead of time, there would be all kinds of Bad Things happening on January 1. It called out in particular computers that were said to be unreachable, like hundreds of feet underwater on oil platforms. (Whether there actually were such computers, I don't know.) There was a sort of chart with the X-axis being effort spent on preventing the problem and the Y-axis being the scale of the resulting disaster. The graph leveled off while still in the "disaster" range, presenting a clear message: "Give us more money and we can prevent catastrophes".
Somehow I haven't been able to find that article. Maybe SciAm suppressed it when the outcome turned out to be way short of a disaster.
There was also a TV (remember that?) news show or three that planned coverage beginning at midnight December 31 somewhere in Europe (Russia and China were off the map; I don't remember about Japan). Of course the news was that there was no news. (Yes, there were some computer programs that died or spat out junk, but nothing rising to the level of news.) I think it was an hour or two after midnight Eastern Time (US) that they ended the newscast.
Was there a Y2K problem? Of course. But it was largely taken care of before January 1, 2000, Y2K Jeremiahs notwithstanding.
I think it's either going to be a retirement plan for many who are young-ish IT people right now, or "optimists hoard gold, pessimists hoard cans and ammo" time with the pessimists being right. And a lot of this depends on how decision makers will remember Y2K.
Nobody is going to waste as much money on it as they did with Y2K and it's way more common for computers to actually use epoch time... but I think almost everything uses 64-bit time now and we're still more than a decade away.
(Don't reply with examples of things that use 32-bit time.)
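For reference, the rollover the parent comment alludes to ("more than a decade away") can be computed directly, since a signed 32-bit time_t tops out at 2**31 - 1 seconds past the Unix epoch. A minimal Python check:

```python
# Compute the Y2038 rollover moment: the largest value a signed
# 32-bit time_t can hold, interpreted as a UTC timestamp.
from datetime import datetime, timezone

rollover = datetime.fromtimestamp(2**31 - 1, tz=timezone.utc)
print(rollover)  # 2038-01-19 03:14:07+00:00
```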
That narrative exists only among the uneducated public. Every Y2K project of course documented its many findings and fixes.
In fact, the REASON Y2K got so much budget and attention were the early companies that started discovering the issues and alerted the others. Notables include IBM, General Motors, Citibank and American Express.
Agreed it was a nice success. We also did pretty well in paperless office, the ozone layer and acid rain, automobile and airplane safety, and the war on cancer, and now obesity and diabetes.
The public were not uneducated about this. If you remember how Y2K was presented to the public, it was ridiculously extreme - planes crashing, economies collapsing, etc. None of that happened, and not because all the bugs were fixed.
You can't fix all bugs, so if the consequences really were going to be catastrophic then you'd expect at least a handful of catastrophes to sneak through, but that didn't happen at all.
> None of that happened, and not because all the bugs were fixed.
No one is in a position to assert that. We have very little idea how fragile our civilization is. Perhaps it's pretty robust, and networks of interconnected problems (like Y2K) stand no chance of snowballing out of control. Or perhaps it's really, really fragile, and surprisingly little stands between us and a profound collapse.
It's very difficult to be certain, because it's such a complicated system, and one that we can't really test to destruction.
Would all the Y2K bugs have caused a widespread systematic failure if they'd gone un-fixed? Probably not... but maybe? Just like all low-probability, high impact risks, it's very hard for us to reason about.
How much money is it wise to spend on averting the risk of giant asteroid impacts? Hard to say. Probably more than you think, though.
The fact that it went so well is not evidence that no original issue existed. On the other hand, maybe it's evidence that we over-invested a bit into diminishing returns.
A perfectly fine-tuned response would have a little bit more to fix on January 1. Of course, expecting society to perfectly fine-tune the response for something poorly understood is hard.
Which only makes it more interesting. There are many takeaways one can have from this article, one of them being that:
- Problem X is serious.
- Y will address problem X
Is incomplete reasoning, or even an outright fallacy. Just because it's claimed that Y will address X doesn't mean it actually will.
Especially on high-stakes issues ("our business will collapse", security, safety) or emotive issues (social justice, security) this type of flawed reasoning seems to be a common problem.
The Snoo is great and the key feature that actually helps prevent SIDS is the restraints and swaddle, which is not being moved to a subscription here. It's actually FDA approved to reduce the risk of SIDS. The "bonus" rocking and soothing noises just help parents get more sleep.
The Snoo is very expensive and easy to pass down or buy used. I think they probably screwed up by selling it outright. You can rent the Snoo, which is probably a better model for everyone. This is kind of a janky way to pull back some of the rental revenue they lost by selling a durable product that people only need for a few months.
It feels gross, I get it. But it's effectively a $100 per child fee which is quite reasonable given the benefits. And there's no realistic way to charge for that other than subscription for the premium (non-safety) stuff. The alternative is to keep developing new models with new features and adding crap people don't need. One thing I love about the original Snoo is that it works fine without an Internet connection or app. I used the app and it was great, but it's nice to know that when you travel or lose power, it can still rock your baby and soothe them. I hope that's still the case if there's a subscription involved.
> the key feature that actually helps prevent SIDS is the restraints and swaddle
Just a note that the NIH guidelines specifically call out this marketing claim as BS:
> Even though swaddling does not reduce the risk of SIDS, some babies are calmer and sleep better when they are swaddled.
They also call out the monitors specifically as also useless for SIDS and issue a general warning that products that claim to reduce SIDS are nearly-universally not useful and are often counterproductive.
> And there's no realistic way to charge for that other than subscription for the premium (non-safety) stuff. The alternative is to keep developing new models with new features and adding crap people don't need.
There’s another alternative: simply sell them for a little more than they cost. Just keep doing that. Solid business plan.
This was before async/generators were added to JS and callback hell was quite real. I wanted to shape it in the way I’d learned to program in Visual Basic. Very human readable. The result is no longer useful, but it was a fun goal to have the compiler compile itself.
Active debris removal (harpoon satellites, magnet arms, whatever) is not a solution to this problem and is a huge waste of money. These missions answer the question "could one dock with debris and deorbit it?", to which the answer is "obviously yes, but at enormous cost", and you don't need to spend 50M euros to prove it.
The answer is exactly what governments and industry have been doing for at least two decades now. Tracking of in-orbit objects, coordinated conjunction response, and rules that require either manual or drag-induced reentry cleanup at the end of a mission. Active maneuverable satellites in orbit (like Starlink) aren't a fundamental problem. The number of objects has gone up significantly, but the big actors are coordinating and following good practices.
> Active debris removal (harpoon satellites, magnet arms, whatever) are not a solution to this problem and are a huge waste of money.
This is wrong because it's based on a flawed assumption.
That assumption being: Propellant is required to deorbit debris, and the rocket equation makes launching all that propellant prohibitively expensive.
And while we can't do anything about the rocket equation, we don't actually need to have propellant in space to deorbit things.
Ways to deorbit without propellant in space:
1. The ground based methods. Although these would likely be seen by superpowers as military escalation of the status-quo.
2. Propulsion-less drone satellites. All propulsion-less designs use some form of sail, which can be used to change the drone's orbit to match the debris before latching on and towing it to a new orbit. Once the debris is in a decaying or graveyard orbit, the drone can detach and go after its next target. All that is needed is time, power (readily accessible via solar this close to the Sun), and reaction wheels (which, now that we know what caused previous designs to fail, as in the Kepler mission, can be built to last).
The most common form of sails would be solar sails, but there's also EDTs and magnetic sails.
What about ablation and/or ablative thrust using lasers?
You'd need to fuel a laser platform, but it could target debris over a huge region. The goal would be both to reduce size and to gently nudge smaller debris into lower (and atmosphere-intersecting) orbits.
You don't need to put a laser in space to do this.
You build a ground-based laser and fire it at objects when they're approximately directly overhead. Pushing upwards on the object basically rotates its orbit, so one side of it will now dip lower into the atmosphere.
I'd argue that even propellant-less deorbit devices are a waste of time. The best answer is what we're doing now: rules about deorbit capability and orbit lifetime, as well as debris production. Even when there are failures, as long as they are a small enough percentage of the pie, debris won't accumulate faster than it clears.
Additionally, all the propellant-less solutions are low-thrust (or ground-based, which is another thing entirely). It's absolutely possible to orbit match, dock, and deorbit an object, but whatever low-thrust device you're using is going to deorbit as well. Maybe it's possible to launch a bunch of small devices like this to do cleanup, but it's not necessary or worthwhile.
This is a great example of a solution that sounds fun and interesting to a problem that's easy to understand at a surface-level. It gets attention and funding, but the real unsexy stuff (tracking, monitoring, collision avoidance) is where the money should go.
Depending on how far costs fall, one potential problem would be an equivalent of "flags of convenience" in sea-based shipping. Small countries with minimal or "favourable" (i.e., short-term-profit-oriented) regulatory regimes could sustain at least some launch capability. Unless there's some way of reining in that activity, I see the problem manifesting to at least some degree.
Even now, getting launch-capable countries on board with restrictions is a likely problem. For the US, EU, and Japan, perhaps not so much; for Russia, China, India, Pakistan (potentially), and North Korea, rather more plausibly.
Most of the countries named are already under heavy sanctions, and have proved resilient against them to a large extent.
One problem with the Kessler Syndrome is that it's a runaway phenomenon, though one that evolves more slowly than most people appreciate. A few bad actors could trigger events which slowly start to seriously degrade at least low-to-mid Earth orbital ranges.
Geosynchronous orbits are possibly less susceptible as the entire orbital ring is large, though geostationary orbital space, strictly along Earth's equator, is more constrained. The Starlink approach of putting comms satellites in very low Earth orbit, which clears fairly quickly, possibly mitigates this in two ways (it makes geosync less critically necessary, and de-orbits satellites quickly). But LEO is still where higher orbits eventually decay to, and might itself be affected with time as well.
The lax regulatory problem, which invokes another underappreciated economic principle, Gresham's Law, is one that's appeared elsewhere and has proved hard to counter. I'd suggest not underestimating its possible noxious effects.
If solar sail probes can change their orbits over time (which they can), then they already have enough thrust. There's no static friction to overcome, so there isn't a minimum thrust that you need to reach.
As long as you can continue to apply thrust over time, then you have a solution.
It doesn't matter if it takes 6 hours, 6 weeks or 6 months. Even a single probe moving the right piece of debris prevents tens of thousands more pieces from being generated. Imagine what three of them could do, or twenty...
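To put rough numbers on "thrust over time": the figures below are my own illustrative assumptions (sail area, combined mass, and delta-v budget are invented for the sketch), not values from the thread. Solar radiation pressure on an ideal reflective sail at 1 AU is on the order of 9 micronewtons per square meter:

```python
# Back-of-envelope: how long does a solar-sail tug take to accumulate
# enough delta-v to deorbit a captured piece of debris?
# Assumptions (illustrative only):
#   - ideal reflective sail at 1 AU: ~9.1e-6 N per m^2
#   - sail area 1000 m^2; tug + debris combined mass 1500 kg
#   - ~100 m/s of delta-v lowers perigee into the atmosphere

pressure = 9.1e-6            # N/m^2, solar radiation pressure on a reflective sail
area_m2 = 1000.0
mass_kg = 1500.0
delta_v = 100.0              # m/s

accel = pressure * area_m2 / mass_kg   # continuous acceleration, ~6e-6 m/s^2
seconds = delta_v / accel
print(f"time to accumulate {delta_v:.0f} m/s: {seconds / 86400:.0f} days")
```

Which lands in the "several months" range the comment describes: tiny continuous thrust, but no minimum threshold to overcome.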
A solar sail isn't a pressure vessel, fuel tank, electronics bay, or other sensitive instrument. It can take considerable abuse before being substantially degraded, let alone failing.
Space debris impacting on a solar sail would all but certainly simply punch neat holes through it. Much as accumulating dust slowly degrades the light-gathering capability of a large reflector telescope, a modestly-perforated solar sail would lose a very small fraction of its effectiveness. But you could probably lose a heck of a lot of surface area before those effects became significant. Strength of the sail itself is probably a minimal concern, though a design with periodic reinforcing threads (themselves having a cost of increased mass) might be more than sufficient to address any strength compromise.
The solar sail would need to be incredibly large for a craft that can move to arbitrary locations in orbit for the purpose of de-orbiting satellites: the sail has to move both the de-orbiter and the de-orbitee via sunlight alone, and achieve the required thrust vector in a reasonable amount of time (this de-orbiter must de-orbit many satellites, remember?).
The larger the sail is, the more likely it is to be hit by space debris at any given moment.
Debris passing through a taut Mylar solar sail will tear holes in the Mylar in the shape of the debris passing through. Any sharp corners in the debris will leave a sharp corner in the hole, and those corners are going to become tears the next time something passes through nearby. The tiny holes become large holes pretty quickly.
Other materials will behave differently, of course, but I don't know of any flown solar sails at all, never mind ones made out of not-Mylar.
> The answer is exactly what governments and industry have been doing for at least two decades now.
To a degree. It helps to not blow shit up, in weapons tests or otherwise. These tests arise due to the weaponization of orbit (specifically), so the goal is really to not weaponize orbit - which governments have been doing the exact opposite of. All nations are deciding to tear Solomon's baby to shreds, instead of having shared custody.
If people want to weaponize space then, sure, go right ahead.
Unless the status quo drastically changes, i.e. bickering old fools being voted out (or removed via other means where voting is not possible/fair) throughout the world, Kessler syndrome is inevitable. I'd wager it happens sooner than runaway global warming.
The way a laser broom works is by imparting an extremely minuscule bit of momentum every time the object is in line of sight of the laser. Over time you lower its orbit enough for atmospheric drag to take over. For small debris, like baseball-sized chunks of insulation, it takes months to deorbit the objects. For something the size of a satellite it would take an order of magnitude longer than the life expectancy of the satellite, and that's assuming it does no station-keeping.
Laser brooms are great because they can deorbit a lot of debris in parallel, which is great if your goal is to slowly clean up an orbit. They are pretty much the worst option for deorbiting a specific object quickly which is a hard requirement for any anti-satellite weapon.
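A rough sketch of that scaling (every number below is an illustrative assumption, not a real laser-broom spec): at a fixed average imparted force, deorbit time scales roughly linearly with target mass, so a ~1,000 kg satellite takes roughly 1,000x longer than a ~1 kg debris chunk.

```python
# Hedged sketch: deorbit time via a laser broom scales ~linearly with
# target mass for a fixed average imparted force. All numbers below are
# illustrative assumptions, not published laser-broom parameters.

SMALL_DEBRIS_KG = 1.0       # assumed baseball-sized insulation chunk
SMALL_DEBRIS_MONTHS = 6.0   # assumed "months to deorbit" from above

def deorbit_months(mass_kg):
    """Deorbit time, assuming the same average laser force on any target."""
    return SMALL_DEBRIS_MONTHS * (mass_kg / SMALL_DEBRIS_KG)

satellite_months = deorbit_months(1000.0)  # ~1 tonne satellite
print(f"satellite: ~{satellite_months / 12:.0f} years")
```

Under these toy numbers a satellite takes ~500 years, which is why a broom that happily sweeps insulation chunks is useless as a fast anti-satellite weapon.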
Makes sense, but are there actually limits that prevent the scaling? I.e. instead of a single laser broom I build 50 near a solar/nuclear plant and use the surplus energy. Usually they'd all deorbit different objects, but I could choose to aim them all at the same one.
I don't see much of a limit to scale. Satellites can only dissipate so much heat, so you don't even have to deorbit one to be successful, you just need to impart enough energy to overheat the satellite or disable key parts.
A single broom for debris clearing is an immense project - likely $500 million to $1 billion to construct. It is basically a big observatory telescope with a gigantic high power laser - it is large, fragile, and completely immobile. That's for slowly cleaning up an orbit, to scale up to a weapons system you'd need a system equivalent to building tens of thousands of them.
You're not building a weapon system with surplus energy. Energy costs are actually pretty negligible - with 0.01% electric-to-kinetic efficiency it's only about 7 million dollars' worth to bring down a 1000 kg satellite. The issue is the equipment. To deorbit that satellite in a year, you would need to on average be pumping 7.8 gigawatts of electricity into the system. If you put literally all of the US's electricity production into it, you could deorbit that single satellite in 2.4 days. And note that is if you could constantly keep the satellite in field of view; realistically only a small percentage of a satellite's orbit will be in view even under the best of circumstances, and many orbits won't be in view at all.
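The energy-cost side of that arithmetic can be sketched like this (the orbital velocity, efficiency, and electricity price are all assumptions; it lands in the same ballpark as the ~7 million dollar figure):

```python
# Hedged sketch of the electricity-cost estimate. Assumptions: the laser
# must couple away roughly the target's full orbital kinetic energy,
# LEO velocity ~7.7 km/s, 0.01% electric-to-kinetic efficiency, and an
# assumed grid price of $0.08/kWh.
MASS_KG = 1000.0
V_ORBIT = 7700.0       # m/s, assumed LEO orbital velocity
EFFICIENCY = 1e-4      # 0.01% electric -> kinetic
PRICE_PER_KWH = 0.08   # USD, assumed

kinetic_j = 0.5 * MASS_KG * V_ORBIT**2          # ~3e10 J of orbital KE
electric_j = kinetic_j / EFFICIENCY             # ~3e14 J of electricity
cost_usd = electric_j / 3.6e6 * PRICE_PER_KWH   # J -> kWh -> dollars
print(f"electricity cost: ~${cost_usd / 1e6:.1f} million")
```

The point being that the electricity bill is rounding error next to the hardware: the hard part is delivering that energy through a telescope-grade laser during the brief windows the target is overhead.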
Yeah, there's nothing physically stopping someone from building such a system, but you're talking about putting basically all of a large nation state's GDP for decades into one weapon system that, in the best case scenario, is going to do an extremely minute amount of damage and realistically would be destroyed long before it could accomplish anything.
Disabling a satellite poses different technical challenges. For momentum transfer, so long as you're hitting the object you're good; targeting critical systems would require far greater precision, and dumping heat faster than the satellite can dissipate it would require even higher instantaneous power. Remember, the satellite is only very briefly in view. While this is almost certainly more feasible than a weapon that works by deorbiting, it is still very difficult. Laser weapons for disabling and destroying aircraft and airborne munitions, an inherently easier task, are an active field of research that many billions of dollars have been dumped into over the years, with no system yet demonstrated as effective. Satellites are just faster-moving, more distant targets for such systems.
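To put a rough number on the overheating route (emissivity, radiating area, and temperature here are all illustrative assumptions): the satellite's radiative dissipation sets the floor the beam's absorbed power must beat, and it only has the short in-view window to do it.

```python
# Hedged sketch: absorbed beam power needed to out-pace a satellite's
# radiative cooling (P = eps * sigma * A * T^4). Emissivity, area, and
# temperature are illustrative assumptions, not a real thermal budget.
SIGMA = 5.67e-8     # Stefan-Boltzmann constant, W/(m^2 K^4)
EMISSIVITY = 0.8    # assumed surface emissivity
AREA_M2 = 10.0      # assumed radiating surface area
TEMP_K = 350.0      # assumed max tolerable skin temperature

# Power the satellite can shed at that temperature; the beam must
# deliver more than this (absorbed, not just incident) to heat it up.
dissipation_w = EMISSIVITY * SIGMA * AREA_M2 * TEMP_K**4
print(f"beam must deposit > ~{dissipation_w / 1e3:.1f} kW continuously")
```

A few kW absorbed doesn't sound like much until you fold in beam spread over hundreds of km, atmospheric losses, reflective surfaces, and the fact that "continuously" is impossible from a fixed ground site.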
ASAT missiles might be too mundane for the megalomaniac Bond villain, but they are an immensely more practical solution to the problem.
> These missions answer the question "could one dock with debris and deorbit it?" To which the answer is "obviously yes, but at enormous cost" and you don't need to spend 50M euros to prove it.
Well, it's not about debris, it's about the capability to sneak up to an enemy satellite and disrupt it without outright destroying it or making it look like a failure.
Shooting a satellite with a rocket ("ASAT") is easy enough - the US, China, Russia and India have proven that capability, and Israel likely has it as well. EMPs from nuclear blasts are another option. But either of those leaves undeniable traces (an EMP blast would likely fry a lot of stuff on the ground!) and the debris can endanger your own satellites as well, so you need something that acts in-space, preferably very difficult to observe from Earth. And something that can grab a dead satellite and drag it out of orbit can also just go and deposit a small explosive charge.
And at that point, 50 million euros are chump change to test that capability - if needed, replace the magnet/hook/whatever with a bomb and that's it.
And make sure that even in the most unhappy cases, you vent your tanks! Vent vent vent vent! Tank explosions are where the really nasty debris numbers come from.
What do you do with the added deltaV from the venting? (not sure if significant)
It could send the already-out-of-control rocket stage/object to a weirder or worse orbit, increasing the chances of collision.
Ideally you have several vents perpendicular to the orbital path and open them at the same time, so the vectors cancel out. That's the happy case. If, because we are in the sad case, we can't get that, it's still better to have one piece of debris versus thousands.
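The happy case is just symmetric impulses summing to zero (vent placement and impulse magnitudes here are made up for illustration):

```python
# Hedged sketch: paired vents perpendicular to the velocity vector,
# opened simultaneously, give zero net delta-V. Numbers are illustrative.
vent_impulses = [
    (0.0, +50.0, 0.0),   # kg*m/s, vent on one side of the stage
    (0.0, -50.0, 0.0),   # matching vent on the opposite side
]
net = tuple(sum(axis) for axis in zip(*vent_impulses))
print("net impulse:", net)   # zero net impulse -> orbit unchanged
```

Lose one vent of the pair and the residual impulse is exactly the un-cancelled term, which is the sad case the comment describes: a perturbed orbit, but still vastly better than a tank explosion.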
Starlink has a number of debris problems, including pieces hitting the ground, so I wouldn't say that they're actually following through on their good practices.
I think that there were no reported cases of Starlink debris hitting the ground (they're designed to burn up in the atmosphere). There was a case of SpaceX's Dragon parts hitting the ground lately, but that's a different thing. Also, debris hitting the ground is a different issue than debris in orbit, with different problems to solve - you can have a satellite that has zero chance of hitting the ground while being a serious hazard in space. It's in SpaceX's best interest to not leave debris in orbit, because any debris from Starlink would be a threat to Starlink itself.
Starlink doesn't have a debris problem thanks to the low orbit. Any debris generated deorbits on time-frames of a few months to a few years. Starlink also has 0 reported Earth debris strikes that I can find.