We'll be running an Apache Spark community sprint, centered on helping folks get their first PRs into the Apache Spark project, on Friday, March 13th from 12:00-7:00 PM at the Snowflake Bellevue Office :)
I think a world without any form of meaningful protection or regulation sounds very dystopian and not like one I would choose to live in under our current capitalist society.
Even the author of the “theorem” you cite indicated that he didn’t believe it to be practical.
The theory was wonderful (yay! stateless load balancer etc.) but in practice browsers at the time were less than happy with trying to store that much state in the URL.
If you've wanted to add satellite communication to your IoT project but traditional pricing has turned you off, we're kickstarting a new device (using an existing LEO constellation).
This is so true. Hire a bunch of senior engineers to solve junior level problems and they'll turn the junior level problems into senior level problems :p
I'm mostly surprised how developers as a community can both:
- Espouse that we really need to care about communication in every form
- Still let a few 'seniors' capable of solving the problem do so, at the cost of destroying most momentum: the solution ends up incredibly difficult to reason about, with next to zero documentation left behind.
Then again, when the business doesn't give you the time to fix this or double-check, it's no wonder. And then hiring managers look cross-eyed at developers for wanting to work on greenfield projects instead of the next dumpster fire.
The philosophy behind MKL is that each CPU vendor provides an MKL for their CPU. If you expect to mix and match MKLs and CPUs, you don’t understand the goals of MKL.
The expectation in the HPC community is that an interested vendor will provide their own BLAS/LAPACK implementation (MKL is a BLAS/LAPACK implementation, along with a bunch of other stuff) that is well tuned for their hardware. These sorts of libraries aren't just tuned for an architecture; they might be tuned for a given generation or even for particular SKUs.
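As a quick way to see this in practice: NumPy/SciPy link against whichever BLAS/LAPACK build was available at install time (MKL, OpenBLAS, Accelerate, ...), and you can both inspect that and call a tuned kernel directly. A minimal sketch; the build info printed depends entirely on how your NumPy was compiled:

```python
import numpy as np
from scipy.linalg import blas

# Show which BLAS/LAPACK build NumPy was linked against
# (e.g. MKL, OpenBLAS, Accelerate -- varies by install).
np.show_config()

# Call the double-precision GEMM kernel directly: C = alpha * A @ B.
# Fortran (column-major) order avoids an internal copy.
a = np.array([[1.0, 2.0], [3.0, 4.0]], order="F")
b = np.array([[5.0, 6.0], [7.0, 8.0]], order="F")
c = blas.dgemm(alpha=1.0, a=a, b=b)
print(c)  # same result as a @ b, computed by the linked BLAS
```

Whether that `dgemm` call is fast on your exact chip is precisely the vendor-tuning question above.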
I learned about this recently while trying to optimize an ML test architecture running on Azure. It turns out that access to Ice Lake chips would allow optimizations that should cut compute time, and therefore cost, by 20-30%.
Each vendor. Intel BLAS (MKL) has Intel-specific optimizations and AMD BLAS has AMD-specific optimizations.
Intel is still acting in bad faith by allowing MKL to run in crippled mode on AMD. They should either let it use all available instructions or make it refuse to run.
The latest oneMKL versions have sgemm/dgemm kernels for Zen CPUs that are almost as fast as the AVX2 kernels (that require disabling Intel CPU detection on Zen).
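For reference, the workaround people historically used to disable Intel's CPU detection was an environment variable. Note that Intel removed this override around MKL 2020 Update 1, so on recent oneMKL versions it has no effect; treat this as a sketch of the old trick, not current advice:

```shell
# Pretend to be an AVX2-capable Intel CPU so older MKL builds
# dispatch their fast AVX2 kernels on AMD Zen hardware.
export MKL_DEBUG_CPU_TYPE=5
./your_mkl_linked_program   # hypothetical binary name
```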
Accelerate and MKL have some overlap (notably BLAS, LAPACK, signal-processing libraries, and basic vectorized math operations), but each also contains a whole bunch of APIs that the other lacks. Neither is a subset of the other.
They both contain a sparse matrix library, but exactly what operations are offered is somewhat different between the two.
They both have image processing operations, but fairly different ones. Accelerate has BNNS, MKL has its own set of deep learning interfaces...
In case you or anyone else knows, are there other libraries that implement a high performance sparse QR? Really, I need a Q-less QR factorization for sparse matrices. As far as I know, there are only two: one comes from MKL:
Part of the issue is that SPQR is dual licensed GPL/Commercial and the last time I checked a license was not cheap. Conversely, MKL has no redistribution fee, so it's been essentially the only option for this factorization if the code can't be bundled in a way compatible with the GPL.
Replying to [dead] sibling post from kxyvr: yes, Accelerate provides a Q-less sparse QR on Apple platforms (https://developer.apple.com/documentation/accelerate/sparse_..., in particular SparseFactorizationCholeskyAtA). I believe that MA49 from HSL does it as well, and may have more acceptable licensing than SuiteSparse depending on your situation.
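For anyone wanting to prototype the idea before committing to SPQR, MKL, or Accelerate: a Q-less QR only needs the triangular factor R, and for a full-column-rank A the upper Cholesky factor of AᵀA equals R up to row signs, which is exactly the trick the `CholeskyAtA` naming hints at. A small dense NumPy sketch of that equivalence (the sparse case, with fill-reducing ordering, is what those libraries actually handle):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 3))  # tall matrix, full column rank

# "Q-less QR": we only want R, never forming Q explicitly.
R_qr = np.linalg.qr(A, mode="r")         # R from a QR factorization
R_chol = np.linalg.cholesky(A.T @ A).T   # upper Cholesky factor of A^T A

# The two factors agree up to the sign of each row.
signs = np.sign(np.diag(R_qr)) * np.sign(np.diag(R_chol))
assert np.allclose(R_qr, signs[:, None] * R_chol)

# Least-squares solve using only R (two triangular solves on the
# normal equations R^T R x = A^T b):
b = rng.standard_normal(6)
y = np.linalg.solve(R_chol.T, A.T @ b)
x = np.linalg.solve(R_chol, y)
assert np.allclose(x, np.linalg.lstsq(A, b, rcond=None)[0])
```

The usual caveat applies: going through AᵀA squares the condition number, which is part of why a true sparse QR (SPQR, MKL, Accelerate) is preferable for ill-conditioned problems.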