I love lisps, but isn't using a garbage collected language on a micro a bit bizarre? I'm sure in lots of cases it won't matter, but then you could probably use some cheap ARM SBC and IO pins (and a full dev environment).
the Venn diagram of situations where a SBC won't do and you don't care about random pauses seems kinda small.. I could be wrong
Lots of people are embracing MicroPython and CircuitPython on microcontrollers these days. The computational capabilities of many of these devices exceed what a high-end PC could do in the early-to-mid '90s. Many applications, especially hobbyist projects, are doing relatively mundane and low-frequency work like turning things on and off, taking sensor readings at human-scale intervals, and then going to sleep. So why not?
I'm not experienced in this whole area, so I meant to ask in an open ended way :)
Does uLisp play well with sleep modes and setting up interrupts to come out of sleep and stuff?
I've been wanting to build a low-power weather logger that can run on a battery for a year. While scoping out the difficulty of the project and digging into low-power stuff, things got very hairy very fast (this was with STM32F1 chips). A GC seems like an added layer of complexity. Like what happens if garbage collection runs during an interrupt handler? Or if your garbage collection is interrupted by something else? Is garbage collection on its own timer/handler that you need to manage?
Or is this built on top of the Arduino main-event-loop model of programming? In which case it doesn't seem to be the normal interrupt-driven thing you'd be looking at for low-power applications.. i think
I run s7 Scheme in audio contexts (with Scheme for Max), which is quite similar (soft real-time), and I've found the impact of the GC to be much smaller than I expected. I need to run things such that an extra ms of either latency or jitter is acceptable, because the GC runs are bursty. So running in an audio app, if the i/o buffer is more than a few ms long (the norm when any heavy dsp is going on anyway), the GC runs just happen in that latency gap, and timing is solid. Or alternatively, timing is off by a ms for one pass and correct on the next, either of which is fine for music.
If you care about latency, not performance, I think it's fine? You can do a Cheney semispace collector in which it's safe to allocate in interrupt handlers if you have control over the ABI (and you've got a few registers to spare) or if your instruction set provides an "increment this memory location" instruction.
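To make the interrupt-safety point concrete: in a Cheney semispace collector, allocation from the active semispace is just a pointer bump, so if the bump is one atomic operation (or uses a reserved register), an interrupt handler can allocate without corrupting a partially updated pointer. A minimal sketch of the allocation side only (the copying/evacuation half is omitted; `gc_alloc` and `SEMISPACE_BYTES` are made-up names, not from any particular implementation):

```c
#include <stdatomic.h>
#include <stddef.h>
#include <stdint.h>

#define SEMISPACE_BYTES 4096

/* Two semispaces; a real collector evacuates live objects from one
   to the other when gc_alloc fails. */
static uint8_t heap[2][SEMISPACE_BYTES];
static int active = 0;

/* Bump offset into the active semispace. The single atomic
   fetch-and-add is what makes allocation interrupt-safe. */
static _Atomic size_t alloc_offset = 0;

/* Allocate nbytes (8-byte aligned) from the active semispace.
   Returns NULL when the semispace is exhausted -- that is the point
   where the (omitted) collection pass would run. */
void *gc_alloc(size_t nbytes) {
    nbytes = (nbytes + 7) & ~(size_t)7;
    size_t old = atomic_fetch_add(&alloc_offset, nbytes);
    if (old + nbytes > SEMISPACE_BYTES)
        return NULL; /* would trigger a GC here */
    return &heap[active][old];
}
```

On an MCU without atomics you'd get the same effect by briefly masking interrupts around the bump, or by dedicating a register to the allocation pointer as the comment upthread suggests.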
The overlap might indeed be small outside the hobby space, but I'd like to note that some MCUs, e.g. the RP2040 (on the Raspberry Pi Pico board), have additional hardware ("PIO") and support for it (in the form of an assembler) in the MicroPython implementation, which allows for some hard real-time applications with latencies in the sub-µs range. So with sufficient hardware support, performance and latency in the language used to program the core might not matter.
The PIO's "state machines" in the RP2040 offer fairly limited functionality, but the principle also applies to more capable hardware like TI's Sitara MCUs and FPGAs with soft-core CPUs.
It wouldn't (likely) be the main way you program a device so much as an interpreter available on the device for interaction.
I last used it on an ARM micro. Building and deploying a new image to flash is substantially slower than using a lisp console over USB serial, and requires me to wire up the programming headers instead of just using the USB line providing power and other services.