A while ago I implemented something similar in Python, although specifying versions requires function calls instead of imports. It turns out that in Python you can execute arbitrary code during imports via hooks, including calling out to pip to install a dependency.
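A minimal sketch of the hook idea (not how the linked project necessarily does it): a meta path finder, appended as a last-resort entry on `sys.meta_path`, that shells out to pip when every other finder has failed, then retries the lookup.

```python
import importlib.util
import subprocess
import sys


class PipInstallFinder:
    """Last-resort meta path finder that tries `pip install <name>`.

    Illustrative sketch only: real use would need an allowlist,
    version pinning, and a lot more caution.
    """

    _busy = False  # guard against recursing while we retry the lookup

    def find_spec(self, fullname, path=None, target=None):
        if self._busy or path is not None:  # only top-level packages
            return None
        type(self)._busy = True
        try:
            subprocess.run(
                [sys.executable, "-m", "pip", "install", fullname],
                check=False,  # a failed install just falls through
            )
            # Let the standard finders locate the (now installed) module;
            # the _busy flag makes this finder step aside during the retry.
            return importlib.util.find_spec(fullname)
        finally:
            type(self)._busy = False


# Appending (not inserting) means this only runs when nothing else
# could resolve the import.
sys.meta_path.append(PipInstallFinder())
```

Because the finder sits at the end of `sys.meta_path`, stdlib and already-installed imports never reach it.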
The article says that Consul and etcd are designed to always be up, but it's actually quite the opposite: they both use Raft to maintain consensus, and thus optimize for consistency at the cost of availability under network partitions. See the CAP theorem.
There are reasons to distribute databases that don't need to be up constantly, e.g. distributing work (transactions or queries) across more resources than a single machine offers, or bringing a replica closer to some other service to reduce latency.
Kafka Streams is the first kind: the source-of-truth storage is HA (at least as HA as the Kafka topics backing it), but it can only be queried with high consistency while the consumer is active, and it goes down during rebalances when you scale out or fail over (and, in many operational setups, during upgrades too).
That said, I think the etcd etc. examples are just meant as a contrast to stock Redis or Memcached, which offer very little HA support, generally just failover with minimal consistency guarantees.
I guess so; if it's not exactly "one out, one in", that would constantly reorder packets. The more I think about it, the more head-drop, as the article suggests, really seems like the best trivial solution.
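Head-drop on a bounded queue is essentially what Python's `collections.deque` with `maxlen` gives you for free: once the queue is full, appending at the tail silently evicts the oldest element at the head.

```python
from collections import deque

# A bounded queue with head-drop semantics: capacity 3.
q = deque(maxlen=3)

for packet in [1, 2, 3, 4, 5]:
    q.append(packet)  # once full, the oldest packet is evicted

print(list(q))  # → [3, 4, 5]: packets 1 and 2 were head-dropped
```

Note that the surviving packets stay in arrival order, so unlike dropping arbitrary elements, this never reorders anything.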
Yep, but IP doesn't provide an ordering guarantee at the link level anyway, and higher-level protocols have sequence numbers so they can reassemble things correctly. If those higher-level mechanisms aren't up for the task of handling lots of small reorderings, that would obviously be an issue.
Great work! I did something similar, albeit with fewer features, for one of my uni classes. If anyone is curious about Rust code: https://github.com/miedzinski/regex.
Okay, something is weird. I have an Outlook.com email with a custom domain (previously Windows Live Domains). These kinds of accounts have had problems with Nylas forever (see my old GitHub issue: https://github.com/nylas/nylas-mail/issues/859). I have tried adding it to the new Nylas app with Office 365/custom IMAP, with no success.
The weird thing is, in the last 15 minutes someone from the USA, an AWS instance (35.167.235.231), tried to connect to my account. This app is supposed to sync locally, right? But I see no attempts from my IP in the Outlook.com panel, just about ten from AWS. All of them started when I installed Nylas and tried to configure it; the remaining logins logged by Microsoft are 100% by me (web client, iPhone Mail.app). WTF?
2nd edit: According to https://github.com/nylas/nylas-mail/issues/3405#issuecomment... someone else is unable to connect with Gmail because the Nylas servers are throwing HTTP 502. Does it sync locally or through Nylas's AWS instances? Is that why I can't configure my account with IMAP?
Is it possible to turn it off? I don't want this, but it looks like I have effectively given you the password to my email account. Can you remove it from your database?
I just retried several times until it got through. Syncing accounts right now. The only problem is I can't find the "File, Edit..." menu on Ubuntu Budgie 17.04, but the rest of the functionality works great so far.
Edit: well, I'm low on coffee today. Just press the Alt key to show the menu! Not a Nylas problem :)
Of course it's just a recommendation. Why would you think otherwise? To Rust, a module name is just a bunch of characters; it doesn't recognize words at all.
Copying everything every time is a trivial (and very slow) solution to the memory safety problem. It just means that everything is on the stack (or at least that only one stack frame has a reference to any given object), so it is simply deallocated along with the stack frame. That's it. There's nothing unsafe about it.
What do you mean by saying mutability is for safety? That's a very unusual opinion.
"sharing data" is quite illusory. Even if two threads have a reference to the same object, the CPU deals with it internally by making several copies, asynchronous message passing and locking, and in many cases it can lead to abysmal performance. It is often much better for performance to design parallel algorithms around shared-nothing i.e. local mutability + explicit message passing right from the start.
> "sharing data" is quite illusory. Even if two threads have a reference to the same object, the CPU deals with it internally by making several copies, asynchronous message passing and locking,
No. Two threads on the same CPU core really do access the same data, without delay, and no locking happens unless the programmer wrote some locking code.
Synchronizing the different cores or processors is another topic, but it's also typically dependent on the software.
Most of the time the implemented techniques do significantly speed up the execution. And it's mostly software design that initiates the slowdowns, not the CPU.
Even on a single core, there are several copies in different cache layers, and synchronizing them is done by sending asynchronous messages. Sure, in the one edge case where the threads share a core you're right, but that is not the typical scenario for multi-threaded applications. Most of the time, for high multi-threaded performance you want exactly the opposite: one thread per core, with threads pinned to cores. And if you don't do anything, you can never be sure whether your threads run on the same core, so you should assume the worst.
> And it's mostly software design that initiates the slowdowns, not the CPU.
This is quite a vague statement, and I'm not sure what you really mean here. Software written against a simplified abstraction (e.g. flat memory shared between threads, ordered sequential execution) that differs greatly from how the CPU really works (hierarchical memory, out-of-order execution, implicit parallelism, etc.) is very likely to suffer "magic" slowdowns. See e.g. false sharing.
Also, algorithms designed around shared mutability do not scale. Sure, you may hide some of the problems with reordering, out-of-order execution, etc. To some degree that helps, but not when you go to the scale of several thousand cores in a geographically distributed system.
It's also an effect of badly written software, not something constantly present in CPU execution. You based the claim I replied to on "sharing is illusory", "if two threads" and "the CPU deals with it", as if all of this necessarily happens in the CPU as soon as threads exist and access the same data.
> when you go to the scale of several thousand cores in a geographically distributed system.
There you are not describing "a CPU" (as in, the thing in the CPU socket of the motherboard), which is all I discussed, and I'm not interested in changing the topic.
Aren't they closely related? Single-ownership semantics benefit concurrency and optimization by eliminating the need for locks and synchronization except in those (rarer) cases where data explicitly needs to be shared between threads, in which case Rust forces you to go through mutexes. In other languages that don't offer this safety, you're likely to lock a lot more stuff as a purely defensive measure, because the only thing preventing unsafe access is the programmer.
Since borrow checking happens at compile time, the whole mechanism is "zero cost". This results in more efficient code: you can safely hand out references (pointers) to privately held pieces of data without needing a heap allocation to track the pointer (e.g. shared_ptr in C++), and the final code needs no runtime lifetime checks, because the compiler did all the analysis for you.
I imagine the combination of ownership and immutability also lets the compiler reorder, eliminate, and simplify generated code better than compilers for most other languages can (Haskell being a possible exception). Not sure if Haskell-style automatic parallelization is planned.
https://github.com/miedzinski/import-pypi