With Google covering only 3%, I wonder how many people still care, and whether they should. Funny: I own and know sites that are by far the best resource on their topic, but Google says they shouldn't have so many links. It's like I ask you for a page about Cuban chains and you say you don't have it because it had too many links. Or your greengrocer suddenly doesn't have apples because his supplier now offers more than 5 different kinds, so he will never buy there again.
> because my prompts are in natural languages, and hence ambiguous.
Legalese developed specifically because natural language was too ambiguous. A similar level of specificity for prompting works wonders
One of the issues with specifying directions to the computer with code is that you are very narrowly describing how something should be done. But sometimes I don't know the best 'how'; I just know the 'what'. With natural language prompting, the AI can tap into its training knowledge and come up with better ways of doing things. It still needs lots of steering (usually), but a lot of the time you can end up with a superior result.
Yes. LLMs are search engines into the (latent) space of source code. Stuff you put into the context window is the "query". I've had some good results by minimizing the conversational aspect and thinking in terms of shaping the context: asking the LLM to analyze relevant files, not because I want the analysis, but because I want a good reading in the context. LLMs will work hard to stay in that "landscape", even with vague prompts. Often better than with weirdly specific or conflicting instructions.
But search engines are not a good interface when you already know what you want and need to specify it exactly.
See for example the new Windows start menu compared to the old-school Run dialog – if I directly run "notepad", I always get Notepad; but if I search for "notepad" then, after quite a bit of chugging and loading and layout shifting, I might get Notepad, or something from Bing, or something entirely different at different times.
I have a lot of questions about the person who wrote that blog post, in that it seems to be a quick hot take without any digging into the reasons why things are the way they are.
Blog first, ask questions later? It's like c'mon man, have at least a little bit of curiosity...
No idea about the author's exact age, but judging from his CV I would bet he was born around Y2K, and, well, IMO it's a testament that usability is based on habits, culture and conventions, not on universal truths.
All that needed to be conveyed was that there are humans who cannot create new memories. That is enough to pose the philosophical question about these models having intelligence. Anything more is just adding an anecdote that isn't necessary.
lol, as if pointing at a wikipedia article (without any relevant discussion of the contents therein) is some kind of conversational excellence.
Or perhaps you were referring to the impact of the two in that the "sledgehammer" of "they can't make new memories" is a lot more effective than the tiny scalpel of "if you do a wikipedia search this is a single one of the relevant articles"
The extra information is that he is the canonical case which defined our clinical understanding of the condition. Not just a "single relevant article."
I pulled it up because I was familiar with this fact.
That will never happen; their only money-making method is to limit the iOS app to sell their cloud. Otherwise, the desktop is already free with your own vault folder.
They are trying their hardest to prevent users from using Google Drive or other services natively. While it would be just a small option to add, it would let everyone drop their $4 cloud subscription.
If that were true Obsidian would not allow third-party sync plugins in the official directory, and wouldn't mention third-party options in the official docs:
I just tried it again to see if something had changed, and I get this message when selecting Other:
###############
Other Syncing Methods:
Obsidian officially supports two syncing methods: Obsidian Sync and iCloud.
However, because obsidian gives you control over your data there are other sync options you can use.
These options include third-party plugins and other tools which may require more advanced setup.
To use an alternative sync method, create a vault and follow the instructions provided by the plugin or third-party sync provider.
###############
I went ahead and created a vault specifically to test this out, but I wasn't able to find any way to open Google Drive from within the app.
To give some context, I use KeePassium with a KeePass vault stored in my Google Drive, and it works seamlessly: I can browse to the directory and select my database right from the file picker. Unfortunately, that same experience doesn't seem to be available in Obsidian.
I'm a Windows user for work and don't use iCloud for anything, so Google Drive is really my go-to. I've tried multiple times to make it work, but it doesn't appear to be an option. As I understand it, Google Drive isn't natively selectable from Obsidian's iOS menu, and this wouldn't be a trivial thing to add since Google Drive appears as a folder in the native iOS file browser, which is exactly how I use it with other apps like KeePassium.
I've submitted this as a request on GitHub several times, and even mentioned that I'd happily pay a one-time fee to unlock this functionality. I'm not a fan of subscriptions personally, but I do believe in supporting developers for their work.
That said, if there truly is a way to use Google Drive with Obsidian on iOS, I'd genuinely love to see a step-by-step guide, could you share one?
Yeah, I figured as much. They don't work natively on iOS. And honestly, I get it. Sync is a key part of the revenue model, and a simple "select your vault from files" option, while it would solve the problem completely, isn't something that makes business sense to just give away. No judgment there. If I'd built something as great as Obsidian, I'd probably make the same call.
Since my only ask was really for a workaround or a link to a solution, I'm guessing there isn't one available, and the suggestion is more about exploring community plugins?
There isn't an official one yet, but there are community options, or you could make your own. If you're wondering why this or that feature hasn't been added to Obsidian, remember we only have three full-time developers (and we're not trying to grow the team).
Fuck this business-sense bullshit. If the feature can be built, it should be built. This is business at the expense of product, aka the reason everything sucks so much nowadays.
> There is an unsolvable disconnect between what the performer's actions and their audience
Is that really true though? If I watch a cellist play I can pretty clearly see all the things they are doing and it will correlate neatly to the timbre of the sound.
Secondly, I think it's important to note that the tube amp and the guitar are separable, and I don't think their connection is particularly magical. I can reamp a sound from my synthesizer (or maybe a keytar?) into a guitar chain, and if I manipulate the mic and other controls the same way I might manipulate the pickup, I can also get all manner of interesting feedback effects. My inputs will have different harmonic characteristics of course, and the tube amp's effects are mostly transformations of harmonics; you'll still get some cool tones, and they will be subject to a lot of the same rules as if a guitar were being played.
They're talking about electronic instruments there. The comment is about how electronic instruments don't generally match the physical expressiveness of acoustic instruments (like the Cello).
I'm talking about how electronic instruments are deficient in expressiveness compared to your cello example.
> Secondly I think it's important to note the tube amp and the guitar are separable, and I don't think that their connection is particularly magical. I can reamp a sound from my synthesizer (or maybe a keytar?) into a guitar chain, and if I manipulate the mic and other controls in the same way I might manipulate the pickup, I can also get all manner of interesting feedback effects.
The story is not quite so simple. Your synthesizer is going to have a buffered output, so it won't have the complex impedance-loading interactions with the amplifier that a guitar pickup does.
This is actually critical to how early distortion effects such as the classic Fuzzface work and imo is essential for the kind of complex timbres you can produce with a guitar + tube amp.
In fact you can take an electric guitar, put a buffer pedal in the chain between your fuzz pedal and amp and completely destroy the ability to produce wild feedback and distortion.
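The loading interaction described above can be sketched numerically. Below is a rough model, with all component values assumed for illustration (a generic series R-L pickup with parallel winding capacitance, and a Fuzz Face-style ~10 kΩ input impedance), not measurements of any real gear. It compares how much of the source voltage actually reaches the pedal input for a passive pickup versus a low-impedance buffered output:

```python
# Hedged sketch: a buffered (low-impedance) source barely interacts with a
# low input-impedance fuzz stage, while a passive pickup forms a strong,
# frequency-dependent voltage divider with it. Component values are
# illustrative assumptions only.
import math

def pickup_impedance(f, R=6800.0, L=2.5, C=100e-12):
    """Series R-L coil model with parallel winding capacitance C."""
    w = 2 * math.pi * f
    z_series = complex(R, w * L)        # coil resistance + inductive reactance
    z_cap = complex(0, -1 / (w * C))    # winding capacitance in parallel
    return (z_series * z_cap) / (z_series + z_cap)

def loading_ratio(f, z_source, r_in=10e3):
    """Fraction of the source voltage appearing across the pedal input."""
    return abs(r_in / (z_source + r_in))

for f in (100.0, 1000.0, 5000.0):
    passive = loading_ratio(f, pickup_impedance(f))  # guitar straight in
    buffered = loading_ratio(f, complex(100.0, 0))   # ~100 ohm buffer output
    print(f"{f:6.0f} Hz  passive: {passive:.2f}  buffered: {buffered:.2f}")
```

With these assumed values, the buffered source delivers nearly all of its voltage at every frequency, while the passive pickup is loaded down more and more as frequency rises, which is consistent with the point that inserting a buffer removes the interaction entirely.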
I'm a guitarist, but there's nothing particularly magical about a high-impedance signal, other than that it tends to lead to noise and makes really obnoxious things matter, like how low your cable's capacitance is. Also, a TON of modern guitars have low(ish)-impedance outputs because they use active pickups.
The pedals and system being dependent on the high impedance was always a bug, not a feature, and it makes the setup incredibly dependent on variables that really wouldn't be that hard to just buffer and then recreate deterministically. If your pedal should react to that impedance, just buffer the front and put a big inductor (or half of a transformer, or, and I've actually seen this, a whole guitar pickup) in the pedal. Then you're not dependent on the pickups of the guitar or the capacitance of the cable or anything else, and you can make sure the effect sounds good regardless of pickup type.
A Fuzz Face works the way it does because it actually gets affected by the guitar's impedance changing as you work the knobs on the guitar and pick differently. The Fuzz Face has minimal input filtering, the guitar's knobs actually change the bias of the first transistor IIRC and cause massive changes in sound.
If you stick a buffer in front of it that interaction is gone and there is nothing you can stick after the buffer to bring it back. You pretty much have to plug the guitar directly into a Fuzz Face for it to work as intended. There are even constant arguments about putting the Wah in front of the FF or after it. I'm not sure if the article even has it right or whether Hendrix did it differently at different times. Other articles show a different order of the effects.
There are other fuzz circuits that behave differently and work better with buffers and would be more uniform when used with other types of instruments or with electric guitars with active pickups (which are buffered).
E.g. I have a Tone Bender and have had several fuzzes in the "Big Muff" category, along with one based on the Fox Tone Machine. The Tone Bender and Big Muff can't clean up via the guitar controls at all like the Fuzz Face can, and IIRC the Fox Tone Machine is somewhere in the middle. The Fuzz Face, when set up correctly, is really quite amazing, as you can go from crystal clear to crushing fuzz with the volume knob on the guitar. Once you've tried it, you realize Jimi Hendrix was doing it constantly, in an amazing way.
That is going to be something like a transformer to step down your line level signal and some series resistance to match the load to help drive the amp.
An actual coil pickup has reactive impedance that is frequency dependent and will result in a more complex interaction between the devices.
> The pedals and system being dependent on the high impedance was always a bug, not a feature
Sure if you think like an engineer, but everything you are complaining about is what allows someone like Jimi Hendrix to do what he did with a guitar.
They're comparing an electric guitar to electronic instruments, like MIDI keyboards. An electric cello would be the same thing as an electric guitar in this context.
Eminently separable, but it's good to be aware of the tradeoffs.
Not magic at all, physics.
It's good to understand that high-impedance is not the biggest deal, but one thing about the magnetic pickups that not everybody realizes is the way that plugging directly into a tube (pre)amp basically magnetically couples the strings to the grid of the input tube.
And that grid has no further physical connection to any other components in the circuit, not even within the same tube, except for clouds of electrons and the flow that occurs among the electrodes.
That way your music basically starts out being sprayed through space directly from the strings which create the magnetic signal.
The thing about high-impedance is the way the relatively minuscule resistor values between the amp's input jack and the input grid's tube pin are so insignificant by comparison to the pickup internal impedance, that resistance might as well be zero.
The only reason there is a resistor in between the input jack and the input grid anyway is to accommodate a high-impedance input with better stability under wider conditions than otherwise.
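The "might as well be zero" claim above can be sanity-checked with a back-of-envelope calculation. The key point is that the grid draws essentially no current, so the series resistor only forms an RC with the tube's effective input (Miller) capacitance. The 68 kΩ resistor and 100 pF capacitance below are assumed, typical-looking values for illustration, not figures from the post:

```python
# Rough check: attenuation from input jack to grid through a series
# resistor driving a nearly open-circuit, capacitive grid input.
# r_stop and c_miller are assumed illustrative values.
import math

def grid_attenuation(f, r_stop=68e3, c_miller=100e-12):
    """|Vgrid / Vjack| for a series resistor into a capacitive grid."""
    xc = 1 / (2 * math.pi * f * c_miller)      # capacitive reactance at f
    return xc / math.hypot(r_stop, xc)         # RC voltage-divider magnitude

for f in (100, 1000, 10000):
    print(f"{f:6d} Hz: {grid_attenuation(f):.3f}")
```

Under these assumptions the resistor drops a negligible fraction of the signal through most of the audio band, only starting to roll things off toward the top, which is exactly why it's there for stability rather than as part of the tone.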
Now you can get a righteous sound with any number of pedals in between the guitar & amp, especially if the battery power is used to boost the signal to more than the guitar puts out magnetically, and it's been the mainstream for so long people almost never consider doing it any other way.
It's just not the same magnetic coupling from the strings to the tube, you can't have both unless it's a tube pedal.
I've designed lots of solid state circuits too and there is plenty of excellence when coupling the same magnetic pickup directly to a silicon or germanium crystal lattice and going from there. Whether it's pedals or a pure solid-state amp. Instead of using any tubes at all.
Also some people prefer having tubes only for the audio output section, coupled to the magnetic speakers through the antique-style audio output transformer the old-fashioned way.
I have MAX and have been using Opus 4.6 heavily for my day job which is 100% agentic programming, and my usage numbers have not changed meaningfully since Opus 4.6 came out
Same here. I burned through both the $20 and $100 plans fast, but I've never hit a limit after dishing out the $200. Explore() sometimes prints 90k token usage, which scares me, but so far it's been consequence-free.
At 8 hours a day 5 days a week I never hit the limit on my $100 MAX plan. People must be running crazy autonomous workflows or something because I'm nowhere near hitting my limit, ever
Each time I've dug into this for someone, it's because they're filling up their context window with a bunch of tokens before any real work even starts.
Highly encourage people having issues to do /context and start removing heavy things. It's usually some sprawling MCPs they rarely use, or huge CLAUDE.md files they generated or cargo-culted from someone else.
I'm not suggesting these are the only ways to hit the limits, it's just (so far) almost always the answer when someone hits the limits doing something that I wouldn't expect to be problematic.
A lot of known crawlers will get a crawler-optimized version of the page