Hacker News | dswilkerson's comments

Happy Thanksgiving everybody!


I will just add a comment on an aspect of using emacs that no one else mentioned: (1) I find that I must bind caps-lock to control, and (2) as far as I can tell, no operating system does this in a way that really works besides OSX. So now I am stuck using OSX because I use emacs. When I use a GNU/Linux machine, I do it by ssh-ing in over the network from an OSX machine. I think you may find this to be something you have to deal with as well.


> as far as I can tell, no operating system does this in a way that really works besides OSX

AFAIK this is an easy setting in desktop environments such as Cosmic, Gnome, and KDE. But I've been using keyd on Linux distros for a while:

https://github.com/rvaiya/keyd#quickstart

Using the config in the above example results in Caps Lock acting as Esc if used on its own or as Ctrl if it's held down.
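For reference, the quickstart config from that README looks roughly like this (quoting from memory; it goes in /etc/keyd/default.conf, and the linked repo is authoritative on the exact syntax):

  [ids]

  *

  [main]

  # Caps Lock taps as Esc, holds as Ctrl
  capslock = overload(control, esc)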


Not sure if it counts as "really works", but on Windows with PowerToys you can enable Keyboard Manager and 'Remap a key'. (Might want to remap right-Ctrl to CapsLock, in case it turns CapsLock on.) There's also old Registry hacks to do the same thing.


Get a Kinesis Advantage; you won't regret it.


> I must bind caps-lock to control, and […] no operating system does this in a way that really works besides OSX.

What? This has worked for me in X11 for at least two years now:

  setxkbmap -model pc104 us -option ctrl:nocaps
(If you still need a Caps Lock key, there’s -option ctrl:swapcaps)


Yes, and that option is in most desktop environments' keyboard configuration dialogs.


I always use "two shift keys activates caps".


Entropy is expected information. That is, given a random variable, if you compute the expected value (the sum of the values weighted by their probability) of the information of an event (the log base 2 of the multiplicative inverse of the probability of the event), you get the formula for entropy.

Here it is explained at length: "An Intuitive Explanation of the Information Entropy of a Random Variable, Or: How to Play Twenty Questions": http://danielwilkerson.com/entropy.html
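If you want to see the definition and the formula line up, here is a quick Python sketch (my own illustration, not from the article):

    import math

    def information(p):
        # information (in bits) of an event with probability p: log2(1/p)
        return math.log2(1 / p)

    def entropy(probs):
        # expected information: sum of p * log2(1/p) over all outcomes
        return sum(p * information(p) for p in probs if p > 0)

    print(entropy([0.5, 0.5]))   # fair coin: 1.0 bit
    print(entropy([0.9, 0.1]))   # biased coin: about 0.47 bits
    print(entropy([0.25] * 4))   # fair four-sided die: 2.0 bits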


Fasting puts your cells into a "clean out the junk" mode that is quite powerful for deleting stuff, including cancer. That is, rather than being passive, fasting is quite active and uniquely potent. I had a lump in my throat near my vocal cords that I was told was a standard response to acid reflux and was inoperable. It was there for 15 years and hurt whenever I would sing. I did a 20 day fast last summer and just happened to have it examined and photographed before and after. Before the fast it was there and after it was completely gone.

Valter Longo is one of the world's experts on fasting. You might want to read this article of his on fasting and cancer: https://www.cell.com/trends/endocrinology-metabolism/abstrac...

____

"Starvation, Stress Resistance, and Cancer" by Roberta Buono and Valter D. Longo

Dysregulated metabolism is one of the emerging hallmarks of cancer cells. Differential stress resistance (DSR) and differential stress sensitization (DSS) responses are the mechanisms caused by fasting and fasting-mimicking diets (FMDs) to promote protection of normal cells and induce cancer cell death. Fasting-dependent reduction in glucose and IGF-1 mediates part of the DSR and DSS effects. Fasting and FMDs have the potential for applications in both cancer prevention and treatment.

____

Either Longo or another fasting researcher pointed out that you can choose a chemotherapy dose at which none of the non-fasting rats survive and all of the fasting rats do. So fasting is a powerful alteration of cells that makes them tolerate chemotherapy much better.

You might want to contact Alan Goldhamer of TrueNorth Health Center. They have almost four decades of experience getting fantastic results by fasting people (about 20K so far), such as curing cancers, lupus, diabetes, etc. See https://youtu.be/42QAyVkAS_0?t=71 or this https://www.youtube.com/watch?v=xuebTcdLIKY

The below is from a friend of mine who, an M.D. told me, has read so much about biomedicine that "it's as if he went to graduate school":

____

In complete contrast to chemotherapy, fasting helps pain, anxiety and depression - http://www.mindthesciencegap.org/2013/04/10/fasting-for-ment....

For general information on fasting, I recommend reading or watching Dr. Jason Fung. He is a nephrologist from Canada. His book The Obesity Code (I have read it) is selling well, but you can get the same information by watching YouTube videos, which I preferred to his book. My favorites were his early, less flashy lectures, starting with "The Aetiology of Obesity Part 1 of 6: A New Hope": https://www.youtube.com/watch?v=YpllomiDMX0 However, if a six-hour graduate lecture series is more than you want to sign on for, any of the more recent videos at www.dietdoctor.com will provide the basics.

In addition to Dr. Fung, a number of doctors are publishing articles and videos about fasting and cancer:

* Dr. Fung quoting the Nobel Prize winner for autophagy - https://www.dietdoctor.com/fasting-cellular-cleansing-cancer... & https://www.dietdoctor.com/attacking-cancers-weakness-not-st...

* Dr. Seyfried - https://www.youtube.com/watch?v=SEE-oU8_NSU - he wrote a book (https://www.amazon.com/Cancer-Metabolic-Disease-Management-P...) that I have not purchased, but it is highly regarded and referenced by others.

* Dr. Winters - https://www.dietdoctor.com/member/presentations/winters – This is a discussion of the metabolic approach to cancer

* Dr. Poff - https://www.dietdoctor.com/can-you-treat-cancer-with-low-car... - Keto diet and cancer

Some of this is very biochemistry-based and is just tons of detail saying "fasting and/or a ketogenic diet will fight cancer." Spending the time to understand the biochemistry of the disease and visualizing what you want your body to do will help your body heal. While this sounds very touchy-feely and like voodoo medicine to a traditionally trained biochemist, the research is strong on the ability of mental imagery to have a therapeutic benefit. (Again, I cite Dr. Rosenthal, neuroscientist, as a higher authority.)

____


Math major here: this is wrong. The expression 1/0 is NOT A NUMBER, even if you allow positive infinity or negative infinity. In particular, it is most certainly not 0.

Note that infinity would be a fine answer IF MATHEMATICS COULD BE CONSISTENTLY EXTENDED to define it to be so, but this cannot be done (see below). Note that using infinity does not "break" mathematics (as some have suggested below); otherwise mathematicians would not use infinity at all.

If we have an expression that is not a number, such as 1/0, you can sometimes consistently define it to be something, such as a number or positive infinity or negative infinity, IF THAT WOULD BE CONSISTENT with the rest of mathematics. Let's see an example of the standard means of getting a consistent definition of exponentiation, starting with its definition on positive integers and extending eventually to a definition on a much bigger set, the rationals (ratios of signed integers).

We define 2 ^ N (exponentiation, "two raised to the power of N") for N a positive integer to be 2 multiplied by itself N times. For example: 2 ^ 1 = 2; 2 ^ 2 = 4; 2 ^ 3 = 8.

Ok, what is 2 ^ N where N is a negative integer? Well we did not define it, so it is nothing. However there is a way to CONSISTENTLY EXTEND the definition to include negative exponents: just define it to preserve the algebraic properties of exponentiation.

For exponents we have: (2 ^ A) * (2 ^ B) ("two raised to the power of A times two raised to the power of B") = 2 ^ (A+B) ("two raised to the power of A plus B"). That is, when you multiply, the exponents add. You can spot check it: (2 ^ 2) * (2 ^ 3) = 4 * 8 = 32 = 2 ^ 5 = 2 ^ (2 + 3).

So we can EXTEND THE DEFINITION of exponentiation to define 2 ^ -N for positive integer N (so a negative integer exponent) to be something that would BE CONSISTENT WITH the algebraic property above as follows. Define 2 ^ -N ("two raised to the power of negative N") to be (1/2) ^ N ("one half raised to the power N"). Check: (2 ^ -1) * (2 ^ 2) = ((1/2) ^ 1) * (2 ^ 2) = 1/2 * 4 = 2 = 2 ^ 1 = 2 ^ (-1 + 2).

Ok, what is 2 ^ 0 ("two raised to the power of zero")? Again, we have not defined it, so it is nothing. However, again, we can CONSISTENTLY EXTEND the definition of exponentiation to give it a value. 2 ^ 0 = (2 ^ -1) * (2 ^ 1) = 1/2 * 2 = 1. This always works out no matter how you look at it. So we say 2 ^ 0 = 1.

I struggled with this for days when I was a kid, literally yelling in disbelief at my parents until they would run away from me. I mean 2 ^ 0 means multiplying 2 times itself 0 times, which means doing nothing, so I thought it should be 0. After 3 days I finally realized that doing nothing IN THE CONTEXT OF MULTIPLICATION is multiplying by ONE, not multiplying by zero, so 2 ^ 0 should be 1.

Ok, is there a way to CONSISTENTLY EXTEND the definition of exponentiation to include non-integer exponents? Yes, we can define 2 ^ X for X = P / Q, where P and Q are integers (a "rational number"), to be the Q-th root of 2 ^ P, that is, the number which, when raised to the power Q, gives 2 ^ P. All the properties of exponentials work out.

Notice how we can keep EXTENDING the definition of exponentiation starting from positive integers, to integers, to rationals, as long as we do so in a way CONSISTENT with the properties of the previous definition of exponentials. I will not go into the details, but we can CONSISTENTLY EXTEND the definition of exponentiation to real numbers by taking limits. For example, we can have a consistent definition of 2 ^ pi ("two raised to the power of pi") by taking the limit of 2 ^ (P/Q) as P/Q approaches pi.
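To see the whole chain of extensions at once, here is a quick Python sketch (my illustration, not part of the original argument) that computes 2 ^ X the same way: repeated multiplication for positive integers, then reciprocals for negative ones, then roots for rationals:

    from fractions import Fraction

    def pow2(x):
        # 2 ** x via the chain of consistent extensions described above
        x = Fraction(x)
        p, q = x.numerator, x.denominator  # x = P/Q in lowest terms
        if q == 1:
            if p > 0:                      # base case: repeated multiplication
                result = 1
                for _ in range(p):
                    result *= 2
                return result
            if p == 0:                     # extension: the empty product is 1
                return 1
            return Fraction(1, pow2(-p))   # extension: 2 ^ -N = 1 / (2 ^ N)
        return pow2(p) ** (1.0 / q)        # extension: Q-th root of 2 ^ P

    # spot-check the algebraic property: multiplying adds the exponents
    assert pow2(2) * pow2(3) == pow2(5)    # 4 * 8 = 32
    assert pow2(-1) * pow2(2) == pow2(1)   # 1/2 * 4 = 2
    assert abs(pow2(Fraction(1, 2)) ** 2 - 2) < 1e-9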

HOWEVER, IN CONTRAST to the above extension of the definition of exponentiation, there is NO SUCH SIMILAR CONSISTENT EXTENSION to division that allows us to define 1/0 as ANY NUMBER AT ALL, even if we allow extending to include positive infinity and negative infinity.

The limit of 1/x as x goes to zero FROM THE POSITIVE DIRECTION = positive infinity. Some example points of this sequence: 1/1 = 1; 1/0.5 = 2; 1/0.1 = 10; 1/0.01 = 100, etc. As you can see the limit is going to positive infinity.

However, the limit of 1/x as x goes to zero FROM THE NEGATIVE DIRECTION = NEGATIVE infinity. Some example points from this sequence: 1/-1 = -1; 1/-0.5 = -2; 1/-0.1 = -10; 1/-0.01 = -100, etc. As you can see the limit is going to NEGATIVE infinity.
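You can watch the two one-sided limits tear apart numerically; a throwaway Python loop (mine, for illustration):

    # 1/x diverges to +infinity from the right and -infinity from the left
    for x in [1.0, 0.5, 0.1, 0.01, 0.001]:
        print(f"1/{x} = {1 / x:g}    1/{-x} = {1 / -x:g}")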

Therefore, since positive infinity does not equal negative infinity, there is NO DEFINITION of 1/0 that is consistent with BOTH of these limits at the same time. The expression 1/0 is NOT A NUMBER, even if you include positive and negative infinity, and mathematics cannot be consistently extended to make it into a number. Q.E.D.


Delta debugging is not new: https://en.wikipedia.org/wiki/Delta_debugging

My implementation of delta debugging, "delta", is 19+ years old: https://github.com/dsw/delta

I released it as Open Source because Microsoft Research sent someone to my office to ask me to, back when Microsoft was calling Open Source a "cancer".

The LLVM introduction by Lattner refers to the "standard delta debugging tool", so it is rather well-known: https://aosabook.org/en/v1/llvm.html 'unlike the standard "delta" command line tool.'


For other readers' benefit: C-Reduce is a little more sophisticated than plain delta-debugging. From the abstract of Test-Case Reduction for C Compiler Bugs (2012):

> [...] [C-Reduce] produces outputs that are, on average, more than 25 times smaller than those produced by our other reducers or by the existing reducer that is most commonly used by compiler developers. We conclude that effective program reduction requires more than straightforward delta debugging.

(Of course, this means that C-Reduce is 12 years old now.)

At the same time, C-Reduce seems to be more general than the LLVM tool you linked ("BugPoint", dating to 2002), since that one works with LLVM IR specifically.

I think most developers are generally unfamiliar with automatic test case minimization tools and techniques, so the post may be helpful even if the ideas have been known in their respective circles for quite some time.
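For anyone in that unfamiliar group, the core idea fits in a few lines. Here is a naive greedy reducer in Python (a sketch, not ddmin proper; ./check.sh is a hypothetical script that exits nonzero when the bug still reproduces):

    import subprocess

    def still_fails(lines):
        # hypothetical oracle: write the candidate out and ask ./check.sh
        # (nonzero exit code) whether it still triggers the bug
        with open("candidate.txt", "w") as f:
            f.writelines(lines)
        return subprocess.run(["./check.sh", "candidate.txt"]).returncode != 0

    def reduce_input(lines):
        # greedily delete chunks; keep any deletion that preserves the
        # failure, halving the chunk size down to single lines
        chunk = max(len(lines) // 2, 1)
        while chunk >= 1:
            i = 0
            while i < len(lines):
                candidate = lines[:i] + lines[i + chunk:]
                if candidate and still_fails(candidate):
                    lines = candidate   # bug survived the cut: keep it
                else:
                    i += chunk          # bug vanished: skip this chunk
            chunk //= 2
        return lines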


"Show me your flowcharts [code], and conceal your tables [schema], and I shall continue to be mystified; show me your tables [schema] and I won't usually need your flowcharts [code]: they'll be obvious." -- Fred Brooks, "The Mythical Man Month", ch 9.


I once wrote to John Carmack as a Quake-obsessed kid, asking for any advice he had for an aspiring programmer and whether he had any favourite books. To my surprise he wrote back a really thoughtful response, including the following:

"Read The Mythical Man Month. I remember thinking that a book that old can't say anything relevant about software development today, but I was wrong."


I came here to share this quote because it's so true.

Except when the effort to change the database schema becomes significantly greater than the effort to change the code, and then application developers start abusing the database because it's faster and they have things to do.


No. If you want a deeper understanding of programming, write your own static analysis / theorem prover.


I used to be a teaching assistant for CS 61A (intro to programming) at Berkeley teaching from this book with Brian as the instructor.

One of Brian's primary points is the following:

> Scheme ... has a very simple, uniform notation for everything. Other languages have one notation for variable assignment, another notation for conditional execution, two or three more for looping, and yet another for function calls. Courses that teach those languages spend at least half their time just on learning the notation. In my SICP-based course at Berkeley, we spend the first hour on notation and that's all we need; for the rest of the semester we're learning ideas, not syntax.

Bullshit. Again, I was a TA for this course. You do not spend the rest of the semester on ideas, you spend the rest of the semester on the students being very confused.

This "everything looks the same" property of Scheme and of all LISP-like languages is a bug, not a feature. When the semantics is different, humans need the syntax to be different. In contrast, LISP/Scheme make everything look the same. It is quite hard to even tell a noun from a verb. This makes learning it and teaching it hard, not easy.

Brian is selling a fantasy here. If you think Scheme is so great, look at this nightmare of examples showing the various ways to implement the factorial function in Scheme: https://erkin.party/blog/200715/evolution/

All of this "abstractions first, reality second" agenda is just a special case of what I call "The Pathology of the Modern": the pathological worship of the abstract over the concrete. Everything modernism touches turns into shit. I am done with living in modernist shit and I hope you are too.


I wouldn't have spoken up except for this comment. As a freshman, I took 6.001, the MIT course that Structure and Interpretation of Computer Programs was based on, and I loved it. As a graduate student, I taught 6.001 three times, twice as head TA under Prof. Sussman and once under Prof. Abelson. In addition to helping make up problem sets, quizzes, and exams, my responsibilities included teaching seven or eight one-hour, five-student tutorial sessions per week as well as teaching a forty-student recitation section once per semester. I graded assignments as well. My point is that I have a lot of experience with what students found challenging in the course.

Prof. Harvey's claim rings completely true to me. Students understood the syntax quickly, and spent little time on it. It was not a point of frequent confusion. There were plenty of difficult concepts in the course, but the details of the programming language were not, for most students, among them.

Students who already had programming experience when they started the course often had more trouble than inexperienced students, but mostly because they had to unlearn imperative habits since the imperative features of the language, except for I/O, weren't used until late in the course.

SICP covers a huge breadth of material, from basic computational ideas to algorithms and data structures to interpreters and compilers to query languages to concurrency, and does it in an entertaining and challenging way. Even decades later, I find myself pulling ideas from it in my daily programming work.

I worked at Google for almost twelve years, and I can't count the times I found myself muttering, when reading a design document, "I wish this person had read SICP."

I'm certainly biased, but I would encourage anyone who would like to become a better software engineer to read SICP and study it carefully. Take your time with it, but do read it.


I've had an introductory Scheme course in a smaller university, and have experience designing data structures, creating parsers & interpreters, and with multi-threading and networking.

I was never one to really dig lisp. I prefer the structure and the groundedness of a statically typed systems language (I mostly do systems work). But I took on reading SICP in the hope of finding something new and interesting, and to level up my skills. However, I got bored by it. Probably made it through more than half of the book.

It's a bummer because I'm left with the feeling of missing out. Am I not worthy or too obtuse to get what's so great about the book? Or maybe I am in fact not the target audience, having too much practical experience for the book to seem worth my while.


If you're comfortable writing interpreters you've probably already picked up on most of the "big ideas" SICP is trying to teach. It's a very good introductory book, but it is still just an introduction.


Honestly GP is making two very valid points though.

Something that Clojure does is differentiating between () = lists of calls, [] = vectors (the go-to sequential data structure), {} = maps. This definitely helps the eye to see more structure at a cursory glance. It has a little bit more syntax compared to Scheme, but the tradeoff seems to be worthwhile.

Secondly, I think it's very healthy to be wary of indirection and abstraction. I'm not sure if I agree with the tone and generalization about modernism, but I think there's a burden of proof, so to speak, when it comes to adding abstractions, especially in the long term.


I think Scheme works well for the kind of conceptual overview the course is trying to provide. I think there is something to the argument that Scheme syntax is not ideal for readability of larger programs, but I would wager that the bigger reason some students find SICP confusing is the same reason it blows others’ minds - the whole approach is at a higher level of abstraction than most “intro to programming” classes.


Yes, I agree these are two good points. I also experienced teaching SICP and would say the overall position of the GP is incorrect and results in a less profound understanding of programming.


> Take your time with it, but do read it.

And do the harder exercises. Really do them, not just read and tell yourself you understand how to do that one and move on.


Thanks for speaking up. At this point no one is really presenting any evidence so it’s a necessary evil to offset the Lisp slander even if it is, like the parent comment, not much more than an appeal to authority / popularity.

Syntax is absolutely neither natural nor unnatural, by nature, to humans, but it’s a fact that fewer symbols to memorize is easier than more symbols to memorize. The problem is a failure to launch. Some people never truly understand that it’s not just syntax, it’s semantics. Data is code, code is data. That’s why it all looks the same. This artificial distinction in “C-like languages” is more harmful for the enlightened programmer than it is helpful. Unfortunately not everyone that reads SICP experiences enlightenment the first time (or ever, I guess?)


Information hierarchies are empirically important and are an essential part of communications design. Uniform syntax makes information hierarchies harder to parse, because the boundaries around different types of information all look the same. It's the same reason we have different sized headings, bold text, etc. They are distinct markers.

So yes, fewer symbols means easier memorization, but you could take that to the extreme and you'll find that binary is harder to read than assembly.

I think Lisp is really elegant, and the power to treat a program as a data structure is very cool. But scanning Lisp programs visually always takes me a little more effort than most other languages.


My impression has been that people complaining about Lisp's parentheses are complaining about them because they are the most obvious difference between Lisp and other languages, but that they're not what is actually causing them problems. It's the functional approach, where everything is in some sense just algebra, that really throws people off. Of course I can't see inside people's minds, but whenever I discuss this with someone for long enough, that's the impression I get.

Parentheses are just a scapegoat.


People complaining about Lisp parentheses are mainly just trolling, not actually working with any kind of Lisp dialect at all.

Traditional Lisps are not functional, but multi-paradigm.

Working with lists is functional though, in that operations that build larger lists out of smaller lists or atoms return a value that you must capture. You don't create an empty list with a persistent identity, which you treat as a bag. New programmers are encouraged to write "pure Lisp", which is a term that denotes list manipulation which treats cons cells as immutable (or any other objects you happen to be using, but mainly those).

Javascript treats character strings similarly the way traditional pure Lisp treats lists. You cannot mutate an existing string to add characters to it, but perform arithmetic on strings to produce new strings. Yet that doesn't prevent the adoption of Javascript. People are cheerfully doing text processing in Javascript in website after website after web application.

The most popular Lisp currently is supposedly Clojure and it is much more doggedly functional than traditional Lisps like Scheme and Common Lisp.

Nope; the parentheses thing is just pure trolling by mainly non-users.

Anyone who actually uses some kind of Lisp could easily write comments that target true weaknesses.

I suspect there is a group out there who has genuine problems with the parentheses, due to cognitive problems like dyslexia and ADHD and whatever. However, I don't see how they can do well with any programming language syntax. Show me what you do use, and how far you've gone with it before I can take you seriously about the parentheses.


I really want to like that idea you're describing; however, I've found in practice there absolutely is a practical difference between code, data, and types. I mean, they literally live in different sections of a process. If you design a program that runs on a real machine, and you spend a lot of time thinking about what the program should do and how it can put the limited resources of the system to good use, you absolutely need to think about code and data separately. Mostly think about data, really.

The one area where "code is data" remains a nice idea in my mind is for metaprogramming. And whenever I've done more metaprogramming than small doses, I've come to regret it later, no matter what the language was. (Small doses of metaprogramming can be done even in statically typed, AOT-compiled languages without RTTI.)

The reason is, I think, that just basic data structures and simple procedures built into a language allow you to express most everything you need, in a very direct manner. The number of distinct concepts you come up with as a programmer can usually be directly defined in the base language. Metaprograms won't create new concepts as such; they're only code run in a different phase. There is definitely a case for generic/templated data structures, but it seems it's best to use them sparingly and judiciously. Be wary of them duplicating a lot of code, fattening up and slowing down your system at compile time and/or runtime.


Upvoted for an interesting take, even though I disagree with some of it.

I took 61A from bh. Personally, I agree with bh's statement that you quoted. Where I encountered difficulty was applying the ideas in a different context (e.g. C or Java). Brian spent time addressing this precise difficulty (in the last lecture or so), but it still wasn't enough for me.

I do heartily agree with you calling out "the pathological worship of the abstract over the concrete". Knuth's Concrete Mathematics was also bucking this trend (e.g. https://youtu.be/GmpxxC5tBck?si=tRHQmuA4a-Hapogq&t=78). I'm curious, once you came to this opinion/realization, how did your teaching/learning change?


Just an anecdote.

I took CS61A by Brian Harvey in 2009. I loved the course and I actually spent very little time learning the syntax and most of the time learning the concepts.

So I fully agree with Prof. Brian Harvey here.


Does it beat OO Java classes with students crying? Or months spent warning kids not to mutate an iterator or else you're gonna cry again?

Mostly kidding but different paradigms bear different pain points it seems.

Oh and lastly, the let-us-care-not-about-syntax position is also an argument at Brown (Krishnamurthi and his team, IIRC).

That said, I'd be curious to hear what your students had to say about scheme confusing traits.


I taught both SICP and Java, and I can confirm Java was far more confusing to students. Classes vs instances, inheritance, polymorphism. Why was everything a class? Don't I just want the computer to do something to some input?


And the public static void main, and then endless conversations about packages and public/private fields, which backfired very pragmatically: at the time, unit test frameworks didn't have a way to call private methods... Ironically, by the time you're done with the basics, nobody has stamina anymore to learn anonymous inner classes.

The thing is, somehow syntax and some forms of abstraction cast a magic spell on most of the population (at times myself included). It's your mental interface to the semantics, so you want more syntax to be able to have more abilities, but syntax composes badly.

At least to me that's why I enjoyed lisps / lambda calc: it reduces the domain into a more homogeneous space, and suddenly more things are possible with less. Although it seems that the mainstream rather enjoys doing simple things with verbose tools (it does look like you're doing a lot of work, with a lot of advanced terminology) than solving hard problems with APL one-liners (hyperbole).

Different psychologies ?


I don’t think OO should be taught to students who aren’t already familiar with structs and passing functions around.

If those two things are already well understood, the nature of OO as some syntactic sugar and a couple of lookup tables is readily apparent.

Without that background, the terminology seems weird and arbitrary and the behavior magical.


You could also portray this as yet another case of theory trumping practice, which is also symptomatic of modernism.

The idea that a language based on a small, elegant set of composable primitives is inherently better for programming in the large as well has not been borne out in practice.


I won't dispute your experience, but for me the point did hold true. SICP was my first introduction to anything Lisp-like, and by that point I'd done C/C++, a bit of Java, quite a bit of Perl/Python, and of course BASIC.

And I was really surprised how quickly and effortlessly I picked up the part of Scheme taught in the book. Faster than any language I had encountered thus far - Python included.


A funny thing to me is I took CS 50 in '85 or '86 (CS 50 and CS 55 were later split into the CS61X series), and instead of Scheme we used Logo for functional programming, using Brian Harvey's textbook. He was not teaching the course that semester.

At least part of the goal of CS 50 at that time was explicitly to weed out students. They didn't want undeclared students to waste a whole lot of time on CS only to find out they were not going to be accepted into CS. Instead, they went through one hard course to find out. Perhaps that explains why some of it was overwhelming to some students?


> Bullshit. Again, I was a TA for this course. You do not spend the rest of the semester on ideas, you spend the rest of the semester on the students being very confused.

I was a TA on an SICP course at a UK university, and I disagree with you. The students weren't confused, the simple syntax really helped and, because all the students had good maths knowledge, a functional style was a lot more intuitive than imperative.

FYI, the course has since been replaced with Python programming.


> It is quite hard to even tell a noun from a verb

What?

Unless the list is quoted or something, the first item after the opening paren is always the "verb", yes?


There's nothing stopping any other item from being a verb, no? (Not the verb, but a verb.) Anything involving higher order functions?


In the context of the verb, everything else is a noun. When you understand what the verb does, then you can care about the difference between a verb and a noun.


Certainly, but the original quote was "It is quite hard to even tell a noun from a verb" (emph. added), and this is correct, you can't tell whether an identifier refers to a function or variable in Scheme by sight alone. This seems desirable if one wants first-class functions, and is very much an intentional choice for Scheme, but it can admittedly be more difficult to build up a mental model of new code if you have no idea what's a variable and what's a function (esp. amidst an already-difficult-to-grok sea of parentheses).

Notably, this isn't intrinsic to Lisps - Common Lisp uses a different syntax and namespace for function names and variables. My understanding is that Scheme et al's decision to merge the namespaces/syntax was not without controversy in the Lisp community (the Lisp-1 v Lisp-2 debate).[0]

[0] http://www.nhplace.com/kent/Papers/Technical-Issues.html


The only "verb" is the open paren. Other languages just make this simple and fundamental rule way more complicated.


> you can't tell whether an identifier refers to a function or variable in Scheme by sight alone

Nor in C. Nor in JavaScript. Nor in Java. Nor in...

I mean, what is "foo"? Could be the name of a function. Could be a char variable. Could be a double precision float. Could be a pointer to an array of pointers to functions returning doubles. Without going back to its definition (or prototype, for function arguments) you can't tell, much the same as you can't tell in Scheme without looking for the matching define or set!

I feel like I must be missing something here. What?


> I feel like I must be missing something here. What?

If I were to hazard a guess at what the original poster was getting at, it might be the culture of those languages, combined with the power of Lisp to redefine its own syntax.

Lispers value concision, love higher-order functions, and love wrapping things in other things to reuse code, so you might easily see a non-trivial stretch of code without a single function call you recognise. Imagine code where the smallest chunks look something like (dq red foo '(hat n (ddl m) f)). There could be anywhere between zero and eight functions in that snippet, or any one of those might be a macro which re-orders the others in any way (or perhaps its parents include a macro, in which case you really can't assume anything about how / if this stretch is executed at all), it could be a wrapper around something that in other languages would need to be an operator (perhaps it's an if statement?), etc etc.

It's absolutely true you can shoot yourself in the foot in any language, but Lisp is unusually good for it. It's part of its power, but that power comes with a cost. Imagine talking with someone that had a proclivity for making up words. In small doses, this might be fun and save time. In larger doses, you begin losing the thread of the conversation. Lisp is sorta like that. It might seem flammorous, but before you prac it grombles, and you plink trooble blamador!


> Imagine talking with someone that had a proclivity for making up words.

All software is written by making up new words. The bigger the software, the more words.

> you can shoot yourself in the foot in any language, but Lisp is unusually good for it

I've never shot myself in the foot writing Lisp, and have not heard any anecdotes about it. (Well, maybe the one about Cycorp's Cyc decades old code base being large and inscrutable.)

You're making shit up.


> I've never shot myself in the foot writing Lisp, and have not heard any anecdotes about it. (Well, maybe the one about Cycorp's Cyc decades old code base being large and inscrutable.)

> You're making shit up.

An unnecessarily abrasive way of saying you disagree, no? Your own lived experience doesn't match mine, and therefore I must be lying? You're being irrational and mean spirited.

Lisp can't at the same time be uniquely powerful, but also no different to any other language. Lisp is a uniquely flexible language, which is one of its main strengths. Uniquely flexible languages impose a cost for readability and collaboration. You're free to disagree and insult me further, but I think this is self-apparent. Lisp's flexibility makes it a great lone wolf language (well, if you neither want access to a majority of libraries nor closeness to bare metal, which is a bit of an odd middle ground for a lone wolf), but it's awkward in organisations and collaborative contexts, where other, less flexible languages have generally overtaken it.


> Lisp can't at the same time be uniquely powerful, but also no different to any other language

There are lots of programming languages which are "uniquely powerful": C++, Prolog, Haskell, ...

> Lisp is a uniquely flexible language

I'm not sure if I buy "uniquely", but "very" would be fine.

> Uniquely flexible languages impose a cost for readability and collaboration.

At the same time it also provides important features for readability and collaboration. There are code bases of complex Lisp software which have been maintained by small and changing teams for several decades.

Lisp is effective not so much for "lone wolves", but for small teams (5 to 100 people) working in a shared infrastructure with larger groups. Example: SBCL is a complex Common Lisp implementation, which goes back to the early 80s (-> Spice Lisp). SBCL is maintained by a group of people and has monthly releases. Around it there is an eco-system of software.

Simpler Lisp dialects can also be effective for larger groups. For example there are many people using "AutoLisp" (or versions of it), a simple Lisp dialect for scripting AutoCAD (and various competitors).


Fair thoughts, all. I'm a big fan of Haskell, but I'm not without sympathy to Lisp, even if my own experience of the latter has been somewhat bumpy.

I'm curious, what are some of the important features for readability and collaboration that you mention Lisp offers?


Assuming Common Lisp. Many features found their way into other languages (or were provided there early, too -> for example named arguments in Smalltalk). Thus some may not look novel, but they have been in practical use for several decades, are well integrated into the language and tools, and are designed for interactive usage: development, coding, extending, and also reading code can be done in parallel while using the software.

Reading source code with batch compilation is actually a very different activity from interactively exploring the source code and the running program at the same time.

Relatively typical is the preference for long and descriptive names in larger software bases, with lots of documentation strings and named arguments.

* Development environments come with many introspection capabilities: describe, documentation, inspect, break, ...

* There are standard features for built in documentation strings for functions, variables, macros, classes, ...

* Macros allow very descriptive code. One can extend the language such that the constructs are very descriptive and declarative.

* Macros allow embedded domain specific code, which makes the code very readable, and gets rid of unnecessary programming details.

* Symbols can get arbitrary long and can contain arbitrary identifiers.

* Functions often have named parameters. Source code typically makes extensive use of named parameters.

* Details like manual memory management are not needed. -> code is simplified

* Many language constructs have an explicit and tight scope. -> for example, variables can't be introduced in arbitrary places in a scope.

* The language standard is very stable.

* Language extension is built-in (macros, reader, meta-object protocol, ...) and everyone uses the same mechanisms, with full language support in the extensions. -> no need for additional and external macro processors, templating engines, XML engines, ...

* Users can more easily share/improve/collect deep language extensions, without the need to hack specific compiler implementation details, since the extension language is Lisp itself.

* Typical code is not using short identifiers or one letter identifiers with a complex operator hierarchy.

* Development is typically interactive, where one loads a program into Lisp and then one can query the Lisp system about the software (edit, who-calls, graph classes, show documentation, ...). Thus the developer does not work only with text, but can interact with and inspect the live software, which is always in a debug mode.

* The code can contain examples and tests, which can be immediately tried out by a programmer while reading the code.

* There is a standardized language with widely different implementations. For collaboration it can be very helpful that even then much of the core code can be shared, instead of having to reinvent the wheel for those different environments. The Lisp code can query the runtime and adapt itself to the implementation. Other systems have that too, with extra external configuration tools. Often, source changes shipped by a different user can be loaded into the running software. They are then immediately active, and information about argument lists, documentation, class hierarchies, etc. is instantly updated.

Here is an example of an interactive definition of a function with documentation, type declarations and named arguments.

    CL-USER 12 > (defun some-example-for-hackernews (&key author to title text)

                  (declare (type symbol author to)
                           (type list text))

                  "This code is an example for Hackernews, to show off readability features."

                  (print (list author 'writes 'to to))
                  (print (list 'title 'is title))
                  (print text)

                  (values))
    SOME-EXAMPLE-FOR-HACKERNEWS

    CL-USER 13 > (some-example-for-hackernews
                  :author 'lispm
                  :to 'troad
                  :title 'lisp-features
                  :text '("example for a function with documentation, type declaration and named arguments"))

    (LISPM WRITES TO TROAD) 
    (TITLE IS LISP-FEATURES) 
    ("example for a function with documentation, type declaration and named arguments") 

    CL-USER 14 > (documentation 'some-example-for-hackernews 'function)
    "This code is an example for Hackernews, to show off readability features."
Another example: DEFCLASS is a macro for defining classes. Again, documentation and introspection are built-in. The developer does not need to read and work with dead text, but can interactively explore and try out the software, while using self-documentation features. As one can see, the macro uses named argument lists similar to those of functions. There is a slot named WARP-CLASS and arguments for types, initialization arguments, documentation, and so on. The macro then expands this form to larger code and saves the user a lot of typing. The language can be extended with other features through similar mechanisms, without the need to go into compiler hacking. Thus language extensions can be written and documented by users in a standard way, which greatly improves how language extensions are used and understood.

    CL-USER 31 > (defclass space-ship ()

                   ((name :type string :initarg :name :documentation "The space ship name")
                    (warp-class :type number :initarg :warp-class :documentation "The warp class describes the generation of the warp propulsion system. 1 is the slowest and 5 is the fastest")
                    (warp-speed :type number :initform 0 :documentation "The current warp speed"))

                   (:documentation "this class describes space ships with warp propulsion"))
    #<STANDARD-CLASS SPACE-SHIP 8220381C2B>

    CL-USER 32 > (make-instance 'space-ship
                                :name "Gondor"
                                :warp-class 3)
    #<SPACE-SHIP 8010170AE3>

    CL-USER 33 > (describe *)

    #<SPACE-SHIP 8010170AE3> is a SPACE-SHIP
    NAME            "Gondor"
    WARP-CLASS      3
    WARP-SPEED      0

    CL-USER 34 > (documentation 'space-ship 'type)
    "this class describes space ships with warp propulsion"


Many thanks for taking the time to show those features off, it's very kind of you and I genuinely appreciate it. I spent about two hours playing with SBCL and its documentation / inspection features, inspired by the examples you gave, and another hour reading some docs. Very neat! Aside from reading a book on CL some time back, my most significant (but still quite peripheral) experience with Lisps has been Clojure, and while I feel like Clojure has better onboarding, I must say that CL feels much more pleasant to actually work with. (If I never see another Java stack trace, it will be too soon.)

I do very much like the named and typed arguments. I took the liberty to do some further reading about SBCL's capacity for compile-time type checks [0], which is a pleasant surprise. I did some quick experimenting, and was also quite impressed with SBCL for catching function calls passing unknown keys at compile time, before the call is invoked.

Perhaps the fact that many Lisp guides feel compelled to start with a terse implementation of lambda calculus might actually be somewhat of a disservice, in hiding the more practical side of the language?

[0] https://lispcookbook.github.io/cl-cookbook/type.html


> Many thanks for taking the time to show those features off, it's very kind of you and I genuinely appreciate it.

:-)

> was also quite impressed with SBCL for catching function calls passing unknown keys at compile time, before the call is invoked.

Generally CL compilers tend to check argument lists at compile time. Number of args, correct keyword arguments, ...

SBCL is especially good, due to its further support of declarations as assertions and its support for various compile time checks. You'll also get Lisp backtraces in a natively compiled Lisp then as a bonus. Also for newcomers it is quite helpful, because SBCL gives a lot of warnings and other feedback for various possible problems (from undeclared identifiers, unused variables up to missing optimization opportunities).

> Perhaps the fact that many Lisp guides feel compelled to start with a terse implementation of lambda calculus might actually be somewhat of a disservice, in hiding the more practical side of the language?

That's true. Lisp was often used in education as a vehicle to learn things like lambda calculus (or similar). Practical programming or "software engineering" with Lisp wasn't part of those courses.

There are books which cover those topics, too. Like "Practical Common Lisp" by Peter Seibel, "Paradigms of AI Programming" from Peter Norvig or "Common Lisp Recipes" by Edi Weitz.

For SBCL one definitely needs to read the manual to get an idea about its extended features.


Thanks again! :) I’ll do some reading, maybe pencil in a simple little project for fun and experience. (A nice little MUD/MOO server, perhaps.)


> awkward in organisations and collaborative contexts

Name three concrete anecdotes with organization names, projects and timelines.

Any language can be a "lone wolf" language. People have collaborated in making very large, well-documented projects in C. They have also made things like this:

https://www.ioccc.org/2001/herrmann2.c

a random-dot-stereogram-generating program whose source is a random-dot stereogram.

A language that doesn't let you be a Lone Wolf if you are so inclined is something that is not designed for grown ups, and not worth using if it has any alternatives at all in its space.


Relatively banal point re Turing complete languages. You could run a space program with bash scripts if you really wanted to, doesn't make the statement "Bash scripts are not generally well suited for running a space program" less true or meaningful.

I'm honestly unsure what the point of this exchange is. Your response style seems to be to pick one sentence, seemingly at random, and launch a hyperbolic and extremely abrasive tirade against it. Which is both unpleasant and unlikely to lead to any meaningful exchange of perspectives or ideas.


> Bash scripts are not generally well suited for running a space program

I completely agree. This may be more of an area that finds you on sure footing.

> What the point of this exchange is

I identify with Lisp, and take the trolling personally.


Ah, so the issue is that you misperceive my genuine reflections to be trolling, which you take for permission to be unkind. Whereas from my perspective, I'm just sharing my reflections about something of interest to me, and find myself somewhat abruptly insulted.

Perhaps you ought to identify less with your tools; you'd find yourself feeling less attacked when they're discussed (and attacking others?). There's an alternative version of this exchange where you contribute your Lisp knowledge in good faith and I benefit from your thoughts. Bit late now, but food for thought.


A synopsis of your "reflections" is that Lisp languages have nothing to offer of advantage, except to oddballs who follow weird practices that are incompatible with collaboration and long-term maintenance.

That's a baseless, misinformed attack on Lisp people, such as myself; if many people read and believe that, it becomes economically harmful.

Almost every capability in any Lisp dialect can be used responsibly, and in a way that a later maintainer will understand, due to good structure of the code, naming, documentation and other practices.


Respectfully, I wonder what my online experience would look like if I took to reading the thoughts of others with such a negative perceptual filter, and felt compelled to create a conflict in response to every point of difference that I automatically take to be a slight. It seems like it would fill my time with unnecessary strife, and result in a generally miserable time for me and others?

If I think someone is wrong, does that necessarily mean they're acting in bad faith, that they're an idiot, and I'm entitled to bully them? What if I'm mistaken? What if I'm not mistaken and they are in fact wrong - does that make such a reaction acceptable? Effective? Pleasant?

For me, this is a single unpleasant exchange that I get to leave behind, forget, and never think about again. For someone with the aforementioned negative perceptual filter, this is an unpleasant exchange they'll recreate and relive in different contexts, again, and again, and again. I find that kind of sad, honestly.

The irony here is that you're clearly quite experienced with Lisp, and had you responded instead with "hey! not quite - here's what you might be missing about how Lisp tends to be used in production... ", this would have been a very different exchange! But instead you chose to call me a lying idiot, which - well - I honestly can't picture anything positive ever coming out of that. Behaving like a bully automatically undercuts anything else you may have to say, which is a disservice to the experience you no doubt have to share. And even if you don't feel like sharing it, why choose to randomly start a conflict? If the goal was to defend Lisp's honour, is that an effective method? Is anyone reading this going to walk away thinking "My, what a lovely and welcoming community Lisp has, I should go check it out"?

I'm out, feel free to have the last word. Let's see if you use it to be mean or not.


Why would the students be confused? By what exactly?

> This "everything looks the same" property of Scheme and of all LISP-like languages is a bug, not a feature.

But you are mixing up things here. There are things that look different. Most things in Scheme can be understood as function calls and match that syntax, but there is different syntax for define, let, cond, if, and others. Not everything looks the same. What you might actually mean is that everything is made of s-expressions. That is actually very helpful when you work with code. It makes it very easy to move things around, especially in comparison to languages like Python, with significant whitespace indentation.

> When the semantics is different, humans need the syntax to be different.

I learned multiple languages before Scheme, and they did leave their scars, but I find that I do not need syntax to be that much different. Maybe I am not human.

> In contrast, LISP/Scheme make everything look the same. It is quite hard to even tell a noun from a verb.

Is that a feature of the English language? I have rarely had this issue in Scheme. Perhaps it is because I think a lot about names when naming things.

> This makes learning it and teaching it hard, not easy.

Maybe I only had bad classes and lectures before reading SICP on my own, but I found that I learned much more from it than most teaching before that was able to teach me.

> Brian is selling a fantasy here. If you think Scheme is so great, look at this nightmare of examples showing the various ways to implement the factorial function in Scheme: https://erkin.party/blog/200715/evolution/

And what exactly is your criticism?

That there are many ways of writing the function? That is a property of many general purpose programming languages. For example we could look at something like Ruby, where it has become part of the design to allow you many ways to do the same thing.

Or the richness of programming concepts available in Scheme? Is that a bad thing? I think not. You don't have to use every single one of them. No one forces you to. But am I glad to have them available when I have a good reason to use them.

Surely you are aware, that the page you link to is at least partially in jest?

> All of this "abstractions first, reality second" agenda is just a special case of what I call "The Pathology of the Modern": the pathological worship of the abstract over the concrete. Everything modernism touches turns into shit. I am done with living in modernist shit and I hope you are too.

I don't know where you got the idea that SICP lauds "abstractions first, reality second". This is not the essence of SICP. SICP invents abstractions once it shows that some previous approach was not sufficient. A good example is the whole "develop a package" thing, where piece by piece the requirements grow and data-directed programming is introduced.


Empirical evidence that the 10x programmer is not a myth.

https://dl.acm.org/doi/pdf/10.1145/362851.362858

"Exploratory Experimental Studies Comparing Online and Offline Programming Performance" by Sackman, Erikson, and Grant. Communications of the ACM. January 1968.

