It's because the problem you need to solve isn't that hard and is already solved; it doesn't need a crazy complex novel solution. All you have to do is present the problem and solution set. XtraMath isn't some sort of complex system, and it doesn't need to be: it's stupid simple and does the thing it's supposed to do.
I think this is the biggest flaw in LLMs and what is likely going to sour a lot of businesses on their usage (at least in their current state). It is preferable to give the right answer to a query, and it is acceptable to be unable to answer one; we run into real issues, though, when a query is confidently answered incorrectly. This recently caused a major headache for Air Canada - businesses should be held to the statements they make, even if those statements were made by an AI or a call center employee.
I mean, in this context I agree. But most people doing math in high school or university are graded on their working of a problem, with the final result usually equating to a small proportion of the total marks received.
This depends on the grader and the context. Outside of an academic setting, sometimes being close to the right answer is better than nothing, and sometimes it is much worse. You can expect a human to understand which contexts require absolute precision and which do not, but that seems like a stretch for an LLM.
I think current LLMs suffer from something similar to the Dunning-Kruger effect when it comes to reasoning - in order to judge correctly that you don't understand something, you first need to understand it at least a bit.
Not only do LLMs not know some things, they don't know that they don't know, because they lack true reasoning ability - so they inevitably end up like Peter Zeihan, confidently spouting nonsense.
> But most people doing math in high school or university are graded on their working of a problem, with the final result usually equating to a small proportion of the total marks received
That heavily depends on the individual grader/instructor. A good grader will take into account the amount of progress toward the solution. Restating trivial facts of the problem (in slightly different ways) or pursuing an invalid solution to a dead end should not be awarded any marks.
I don't know... here's a prompt query for a standard problem in introductory integral calculus, and it seems to go pretty smoothly from a discrete arithmetical series into the continuous integral:
"Consider the following word problem: "A 100 meter long chain is hanging off the end of a cliff. It weighs one metric ton. How much physical work is required to pull the chain to the top of the cliff if we discretize the problem such that one meter is pulled up at a time?" Note that the remaining chain gets lighter after each lifting step. Find the equation that describes this discrete problem and from that, generate the continuous expression and provide the Latex code for it."
It has gotten quite impressive at handling calculus word problems. GPT-4 (original) failed miserably on this problem (it attempted to set it up using constant-acceleration equations); GPT-4o finally gets it correct:
> I am driving a car at 65 miles per hour and release the gas pedal. The only force my car is now experiencing is air resistance, which in this problem can be assumed to be linearly proportional to my velocity.
> When my car has decelerated to 55 miles per hour, I have traveled 300 feet since I released the gas pedal.
> How much further will I travel until my car is moving at only 30 miles per hour?
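For what it's worth, the trick here is that linear drag makes velocity drop linearly with distance, so the answer needs neither the car's mass nor the drag coefficient. My own working below, not the model's output:

```latex
m\frac{dv}{dt} = -c\,v
\quad\Longrightarrow\quad
\frac{dv}{dx} = \frac{dv/dt}{dx/dt} = -\frac{c}{m} = \text{constant}
```

So speed falls linearly with distance traveled: losing 10 mph (65 to 55) took 300 ft, so losing the next 25 mph (55 to 30) takes 25/10 × 300 ft = 750 ft.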
Does it get the answer right every single time you ask the question the same way? If not, who cares how it's coming to an answer - it's not consistently correct and therefore not dependable. That's what the article was exploring.
For a lot of the embeddings we have today, the norm of any embedding vector is roughly the same, so the angle between two vectors tracks the length of the difference vector you're describing, and it can be expressed in terms of 1 minus the dot product after scaling.
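Concretely, for unit-norm vectors the squared Euclidean distance and the dot product carry the same information: ||u - v||² = 2(1 - u·v). A quick numpy sketch (illustrative only, not tied to any particular embedding model):

```python
import numpy as np

rng = np.random.default_rng(0)

# Two random stand-ins for embeddings, normalized to unit length
# (many embedding models return vectors that are roughly unit-norm).
u = rng.normal(size=768)
v = rng.normal(size=768)
u /= np.linalg.norm(u)
v /= np.linalg.norm(v)

cos_sim = np.dot(u, v)           # cosine similarity (= dot product for unit vectors)
sq_dist = np.sum((u - v) ** 2)   # squared Euclidean distance

# For unit vectors: ||u - v||^2 = 2 * (1 - u.v),
# so ranking neighbors by distance or by dot product gives the same order.
print(sq_dist, 2 * (1 - cos_sim))  # the two values agree
```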
I don't have an answer for this really, outside of silly ones like "strict equality check", but I assert that no one else does either, at least today and right now; it's an inherent limitation due to the nature of embeddings and the space they are meant to occupy (cheap, fast, good-enough similarity for your use case).
You're probably best off using the commercial suggestion, and if it's dot product, go for it. I am no expert in this area and my interest wanes every day.
> Apart from the bloat, the main problem of Microsoft LinkedIn is that it does not let you export your contacts' infos, which really is a must-have feature of a contact platform.
Platform lock-in is certainly intended (even though it sucks for users).
Consider Euler Circle https://eulercircle.com/ if the kid is interested in learning about modern math. They have online classes, and finances may not be a concern if the student is admitted and is strong enough.
When you look at a standard AI textbook, such as Russell/Norvig, you see that there is not much to what gets called "AI". The simplest "intelligent agents" are functions with an "if" statement. The smallest Node.js application has more complexity.
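For instance, the book's early reflex vacuum agent really is just a condition-action lookup; a toy rendering in Python (my own sketch, not code from the textbook):

```python
def reflex_vacuum_agent(percept):
    """Simple reflex agent for the two-square vacuum world:
    it acts only on the current percept, with no state and no planning."""
    location, status = percept  # e.g. ("A", "Dirty")
    if status == "Dirty":
        return "Suck"
    elif location == "A":
        return "Right"
    else:
        return "Left"

print(reflex_vacuum_agent(("A", "Dirty")))  # -> Suck
print(reflex_vacuum_agent(("B", "Clean")))  # -> Left
```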
It's a useful tool when examining the impact on moral questions; so much of the talk about the transformative power of AI becomes clearer once you give up the pretence that introducing AI creates a new class of moral actor that breaks the conventional chains of responsibility.
A recent example of how people try to use this mystical power of AI to absolve themselves of responsibility for their actions is how UnitedHealthcare, an organisation largely in the business of withholding health care from those in need, introduced an atrociously bad "AI" to help them deny applications for coverage.
In that example it is very clear that the "AI" is simply an inert tool used by UHC leadership to provide the pretext they feel is needed to force the line workers to deny more care without the whole thing blowing up because of moral objections.
I mean, the Cultural Revolution was still going on 50 years ago lol