Thinking back to high school and college, the biggest issue I had with math in both is how frequently it's taught in the absence of practical examples. Speaking personally (though I believe many feel similarly), I need to see real-world examples to develop a true grasp of new concepts in any reasonable amount of time, for reasons of both motivation and intuition.
> how it's frequently taught in absence of practical examples
Unfortunately this has been the debate for thousands of years. One of Euclid's students asked him what practical use geometry had. In response, Euclid instructed a servant to give the student a coin, saying, "He must make gain out of what he learns."
Legend aside, I'm surprised at how often people on HN criticize the "absence of practical examples". If we compare textbooks written in the US with those in China or Europe, there's a sharp contrast. US textbooks are thick, full of discussion of motivations and practical examples across multiple domains (Thomas' Calculus, for instance). Chinese and European textbooks, in contrast, are terse and much thinner: they focus on proofs and derivations and have only sparse examples.
Personally, I find maths itself practical enough. I'd even venture that those who can naturally progress to college-level maths should be able to appreciate high-school maths for its own sake.
I agree with this in theory, but I don't necessarily need the "real world"-ness.
A lot of the stuff you learn initially, about systems of equations and how they relate to the matrix inverse, is very interesting because it's clear how it can be applied.
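To make that concrete, here's a minimal NumPy sketch (the system itself is made up for illustration) of how solving a system relates to the inverse:

```python
import numpy as np

# A hypothetical 2x2 system:
#   2x + 1y = 5
#   1x + 3y = 10
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
b = np.array([5.0, 10.0])

# The textbook route: x = A^{-1} b
x_inv = np.linalg.inv(A) @ b

# The numerically preferred route: solve directly, no explicit inverse
x_solve = np.linalg.solve(A, b)

print(x_inv)                         # [1. 3.]
print(np.allclose(x_inv, x_solve))   # True
```

In practice you almost never form the inverse explicitly; `solve` is both faster and more numerically stable, but the conceptual link is the same.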
But as you move forward into vector bases and change of coordinates there's a very long dry spell that you sort of have to slog through until much later when you start to see how it is actually useful. I'm not sure how to fix this -- maybe take a step back from the symbolic and do some numeric computations because that's where they start becoming useful again.
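On the "do some numeric computations" point, even change of coordinates gets less dry once you compute one. A small sketch (basis and vector chosen arbitrarily): if the columns of `B` are your new basis vectors, finding a vector's coordinates in that basis is just another linear solve.

```python
import numpy as np

# Hypothetical basis for R^2: the columns of B are the basis vectors
B = np.array([[1.0, 1.0],
              [0.0, 1.0]])

v = np.array([3.0, 2.0])   # v in standard coordinates

# Coordinates of v relative to B: solve B @ c = v
c = np.linalg.solve(B, v)  # c = [1, 2], since 1*(1,0) + 2*(1,1) = (3,2)

# Changing back to standard coordinates is just multiplication by B
print(np.allclose(B @ c, v))  # True
```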
IMHO there should be two versions of linear algebra. One for computer science majors and one for mathematicians. I regularly run into stuff at work where I say to myself, "self, this is a linear algebra problem" and I have next to no idea how to transform the problem I have into matrices or whatever.
But I can write a really stinkin' fast matrix multiplication algorithm. So there's that I guess.
Modern CPUs with big ass SIMD registers are incredibly fast at slogging through a linear algebra problem. Lots of incredibly intelligent people (ie, not me) spend an incredible amount of effort optimizing every last FLOP out of their BLAS of choice. For several years the only question Intel asked itself when designing the next CPU was, "How much faster can we make the SPECfp benchmark?" and it shows. Any time you can convert a problem using whatever ad-hoc algorithm you came up with into a linear algebra problem, you can get absurd speedups. But most programmers don't know how to do that, because most of their linear algebra class was spent proving that the only invertible idempotent nxn matrix is the identity matrix or whatever.
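As a toy illustration of that conversion (data and sizes made up), here's the same computation written as an ad-hoc Python loop and as a matrix-vector product; the second form hands the work to whatever optimized BLAS NumPy is linked against:

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.standard_normal((1000, 64))   # 1000 samples, 64 features
weights = rng.standard_normal(64)

# Ad-hoc version: hand-rolled loop over samples and features
scores_loop = np.array([sum(row[i] * weights[i] for i in range(64))
                        for row in data])

# Same computation phrased as linear algebra: one matrix-vector product,
# dispatched to the SIMD-optimized BLAS routine under the hood
scores_blas = data @ weights

print(np.allclose(scores_loop, scores_blas))  # True
```

The results are identical (up to floating-point rounding), but the `@` version is the one that benefits from all those FLOP-squeezing engineers.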
Discrete math has the same problem. When I took discrete math in college, the blurb in the course catalog promised applications to computer science. It turns out the course was just mucking through literally dozens of trivial proofs of trivial statements in basic number and set theory, and then they taught us how to add two binary numbers together. The chapters on graphs, trees, recursion, big-O notation and algorithm analysis, finite automata? Skipped 'em.
Yes, I'm currently dealing with a text that has the line "you will end up with ~2^32 equations and from there it's just a trivial linear algebra problem" without further guidance (it's from the general number field sieve).

I get that 2^32 simultaneous equations may be a straightforward linear algebra problem, but I am now digging deep to understand the exact mechanism for solving it.
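For what it's worth, the GNFS matrix step works over GF(2): you want a nonzero null-space vector of the exponent matrix mod 2 (a subset of relations whose exponents sum to even values). At 2^32 scale that's done with sparse iterative methods like Block Lanczos or Block Wiedemann, but the idea fits in a toy dense Gaussian elimination (the matrix below is made up):

```python
import numpy as np

def gf2_nullspace_vector(M):
    """Return a nonzero null-space vector of M over GF(2), or None.

    Toy dense elimination for illustration only; the real GNFS matrix
    step uses sparse methods (Block Lanczos / Block Wiedemann).
    """
    M = M.copy() % 2
    rows, cols = M.shape
    pivot_cols = []
    r = 0
    for c in range(cols):
        # Find a pivot (a 1) in column c at or below row r
        pivot = next((i for i in range(r, rows) if M[i, c]), None)
        if pivot is None:
            continue
        M[[r, pivot]] = M[[pivot, r]]
        # XOR the pivot row into every other row with a 1 in column c
        for i in range(rows):
            if i != r and M[i, c]:
                M[i] ^= M[r]
        pivot_cols.append(c)
        r += 1
    free_cols = [c for c in range(cols) if c not in pivot_cols]
    if not free_cols:
        return None  # trivial null space
    x = np.zeros(cols, dtype=np.uint8)
    x[free_cols[0]] = 1  # set one free variable
    # Back-substitute each pivot variable from the reduced rows
    for row, c in enumerate(pivot_cols):
        x[c] = M[row] @ x % 2
    return x

# Tiny made-up example: rows are parity constraints on 3 "relations"
M = np.array([[1, 0, 1],
              [0, 1, 1],
              [1, 1, 0]], dtype=np.uint8)
x = gf2_nullspace_vector(M)
print(x, (M @ x) % 2)  # a nonzero x with M x = 0 (mod 2)
```

In the factoring context, the 1s in `x` pick out the subset of relations to multiply together to get a congruence of squares.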