Hacker News

Author here – I think you're probably right. I wrote the Gaussian elimination section more as a recap, because I figured most readers have seen Gaussian elimination before, and I was keen to get to the rest of it. I'd love to hear if other folks had trouble with this section. Maybe I need to slow it down and explain it better.


I actually really liked the Gaussian elimination part. It's a term you hear often and 'demystifying' it is good imho.

My only nitpick is that it's a pity you use only 1s and 2s in the carbs example. Because of the symmetry, it's harder to see which column/row matches which part of the vector/matrix: with nothing but 1s and 2s, it fits both horizontally and vertically...


Loved the article, and also the shoutout to Strang's lectures.

I agree about the order; the Gaussian elimination section should come later. I almost closed the article - glad I kept scrolling out of curiosity.

Also, I felt like I had been primed to think about nickels and pennies as variables rather than coefficients due to the color scheme, so when I got to the food section I naturally expected to see the column picture first.

When I encountered the carb/protein matrix instead, I perceived it in the form:

[A][x], where the x is [milk bread].T

so I naturally perceived the matrix as a transformation and saw the food items as variables about to be "passed through" the matrix.

But another part of my brain immediately recognized the matrix as a dataset of feature vectors, [[milk].T [bread].T], yearning for y = f(W @ x).

I was never able to resolve this tension in my mind...
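Both readings of the matrix are actually consistent: A @ x can be computed row-by-row (each row dotted with x gives one equation) or as a weighted combination of A's columns (each food's macro vector, scaled by servings). A quick sketch with made-up numbers (the article's actual macro values aren't quoted in this thread):

```python
import numpy as np

# Hypothetical macro matrix: columns are foods (milk, bread),
# rows are macros (carbs, protein). Values are illustrative only.
A = np.array([[1, 2],
              [2, 1]])
x = np.array([3, 1])  # hypothetical servings of milk and bread

# Row picture: each row of A dotted with x is one equation's left-hand side.
row_picture = np.array([A[0] @ x, A[1] @ x])

# Column picture: the same product is a combination of A's columns,
# weighted by the servings.
col_picture = x[0] * A[:, 0] + x[1] * A[:, 1]

print(row_picture, col_picture)  # both equal A @ x
```

The "transformation" view and the "dataset of feature vectors" view are the same computation seen from the row side and the column side.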


To some, "Now we can add the two equations together to eliminate y" might need a little explanation.

The (or at least an) answer is that since the LHS and RHS of an equation are equal, you can add or subtract that equation to another one and preserve equality.

If I remember correctly, substitution (isolating x or y) was introduced before this technique.
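To make that concrete, here's the idea with two hypothetical equations (not the article's actual numbers): adding them left-side-to-left-side and right-side-to-right-side keeps equality, and a good choice of equations makes one variable cancel.

```python
# Two hypothetical equations:
#   x + y = 4
#   x - y = 2
# Adding them termwise eliminates y:
#   (x + y) + (x - y) = 4 + 2  ->  2x = 6  ->  x = 3
x = 6 / 2
y = 4 - x  # back-substitute into the first equation

# Verify the solution satisfies both original equations.
assert x + y == 4
assert x - y == 2
print(x, y)  # 3.0 1.0
```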


Positive proportion - negative proportion = 0.


I hadn’t, and your article lost me there to be honest. You didn’t explain the what, why, or when behind it, and it didn’t make sense to me at all. That said, I’m abnormally horrible at math.


> You didn’t explain the what, why, or when behind it

>> The trouble starts when you have two variables, and you need to combine them in different ways to hit two different numbers. That’s when Gaussian elimination comes in.

>> In the last one we were trying to make 23 cents with nickels and pennies. Here we have two foods. One is milk, the other is bread. They both have some macros in terms of carbs and protein:

>> and now we want to figure out how many of each we need to eat to hit this target of 5 carbs and 7 protein.
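For anyone who got lost at that step, here's a minimal sketch of the elimination itself. The article's actual macro numbers aren't quoted above, so these coefficients are hypothetical (a comment upthread suggests they were all 1s and 2s); only the 5-carb / 7-protein target comes from the quote.

```python
def solve_2x2(a, b, c, d, e, f):
    """Solve  a*x + b*y = e  and  c*x + d*y = f  by Gaussian elimination."""
    # Step 1: eliminate x from the second equation by subtracting
    # (c/a) times the first equation from it.
    m = c / a
    d2 = d - m * b
    f2 = f - m * e
    # Step 2: back-substitute to recover y, then x.
    y = f2 / d2
    x = (e - b * y) / a
    return x, y

# Hypothetical macros: milk = 1 carb, 2 protein; bread = 2 carbs, 1 protein.
#   1*milk + 2*bread = 5   (carbs target)
#   2*milk + 1*bread = 7   (protein target)
milk, bread = solve_2x2(1, 2, 2, 1, 5, 7)
print(milk, bread)  # 3.0 1.0
```

Whatever the real numbers are, the mechanics are the same: scale one equation so a variable's coefficients match, subtract to eliminate it, then back-substitute.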


Noted! I may make a totally separate post on Gaussian elimination. Could you talk me through which parts were confusing? And would you be willing to review a post on Gaussian elimination to see if it works for you?


Your assumption worked for me... I've seen Gaussian elimination before (but not the linear algebra), which gave me an idea of what we were doing.


Do you have any plans to turn it into a full book, maybe called Grokking Linear Algebra?


Lol. Maybe! I did enjoy writing Grokking Algorithms, but writing a full book is a real commitment. That one took me 3 years.



