A complete 4-year course plan for an AI undergraduate degree (mihaileric.com)
183 points by oumua_don17 on May 27, 2020 | 101 comments


No surprise that this doesn’t include any consideration for the profound moral, ethical, or social implications of AI and AI solutions.

Nothing about bias in models. Nothing about communication. Nothing about privacy. Nothing about security. Nothing about resiliency. Nothing about the responsibility of the individual practitioner. Nothing about the economic or ecological impact of the work.

Every undergraduate degree should include both general and subject-specific courses on ethics, morals, and social, economic, and ecological impacts.


For comparison, another poster mentioned a real AI course over at https://news.ycombinator.com/item?id=23322449 which does include this.

There is a tendency for articles by extremely talented, highly educated, yet relatively immature people to completely over-optimize for technical depth. It's a failure with many articles that get posted here. I wonder how the author might change their article after 10 more years of experience.


I don't know. ...it's a bit sci-fi to add ethical instruction to AI dev. We are still way way distant from that.

Are there ethical classes for Organic Chemistry majors? Industrial engineers? Physics?


> I don't know. ...it's a bit sci-fi to add ethical instruction to AI dev. We are still way way distant from that.

I don't know what you mean. Can you elaborate?

> Are there ethical classes for Organic Chemistry majors? Industrial engineers? Physics?

Good question. We do have these, although the application varies. Engineering disciplines probably have this more explicitly embedded from direct ethics-type classes through to insertion of failure case studies and the like. These degrees tend to align with specific professional organisations for mechanical engineering, construction, etc. Membership of these organisations may require specific courses, but most often this is pushed down to the educational institutes. Computer Science and related areas have tended to follow this direction.

For someone studying chemistry or physics, there is less of a direct link, but it's somewhat unusual for an ethics requirement to be missing from a syllabus. Someone studying these might move into medicine, finance, or go to a startup. There is quite a spread of requirements there, and the destination generally governs these in a more organic way. If you took this at the purest level and the chemist moved into a chemistry job at a pharma company, then membership of something like the American Chemical Society might be expected. This brings a specific set of requirements.

In any case, it is only a certain segment of the computing world which eschews ethics and professionalism. A large segment interacts with all of these other professions, which do place a higher value on ethics and enforce them to a higher degree. Even working internationally, you'll find yourself bound by a code of ethics in your own country, but limited only by law in others.


> Are there ethical classes for Organic Chemistry majors? Industrial engineers? Physics?

Yes, yes there are.


"a bit sci-fi"? What does that mean? We allow AI systems to decide who goes to jail in some places - I'd say that's a clear enough sign that the field needs people thinking all the way through their work.

There are some good entry-level books on the topic, like "Weapons of Math Destruction", that you might enjoy.


Some universities do have a mandatory ethics course, so yes? Not all, of course, or maybe not even most. But "x and y don't follow the law anyway, so why should we?" is not a good argument.


It varies. It's definitely more explicit in my experience of UK and EU systems. In the US it's not always as explicit, but institutions sometimes approach this in other ways, cf. Honor Councils. Computer science and related areas can be perceived as immature, and this conversation is one indication of that.


I seem to recall humanities courses once having served the purpose.


> No surprise that this doesn’t include any consideration for the profound moral, ethical, or social implications of AI and AI solutions.

Would you say the same thing for an undergrad in SE (actually asking)? I feel like the hype train for AI has moved it into a plane where it has to be some moral arbiter but it seems to me that the same demands could be placed on most fields of study including general software engineering.

> Nothing about bias in models. Nothing about communication. Nothing about privacy. Nothing about security. Nothing about resiliency. Nothing about the responsibility of the individual practitioner. Nothing about the economic or ecological impact of the work.

Maybe it is just my bubble, but I feel like very few AI jobs actually deal with hot-button ethics (facial recognition for dubious purposes, etc.). It is just that these fringes get all the attention, as if they represented the broad community (they don't). For instance, I work in AI for satellite communications; most of the things you bring up don't really apply (any more than they do to the software engineers I work with). My former work was in AI and computer vision in marine ecosystems. Again... your moral questions don't really apply (privacy of coral reefs?). I agree it is important to be aware of these issues, but most of the real jobs in AI, I feel, are more scientific in nature, not the privacy-busting, bias-inducing ones we hear about a lot. Again, that could be my bubble to some degree, I suppose.


I absolutely do. It's why I was so quick to jump on that with this.

My personal belief is that every undergraduate degree should include these subjects. That's what separates an undergraduate degree from a bootcamp--in theory.

But I think it's especially important for engineers and scientists, because it is very often overlooked or discounted by virtue of the fact that there's so much "technical" stuff to get into.

I think schools should try to counteract that by producing more team-taught material. E.g., let's not just do a regular philosophy course; let's do a philosophy for engineering and science course that takes you through the same general philosophy, but with readings that focus on engineering- and science-related ideas and themes.

But we should also have dedicated courses on these larger issues, and discussions of them woven into our other courses. Engineers, especially, have such a profound impact on society. It's important for us to be trained to think about these things. Not what to think, but that it is necessary to do so, and perhaps some tools and reference points for being able to do so in a cogent manner.

Lastly, I think it's important for undergraduates to get some exposure to the ethical issues related to work: the relationship between employee and employer, intellectual property, integrity in science and methods, etc.


"It's important for us to be trained to think about these things."

This doesn't seem self evident to me.


I think courses covering professional ethics are now fairly common in CS degrees. For example, here is the current course at Edinburgh: http://www.drps.ed.ac.uk/20-21/dpt/cxinfr10022.htm


I got my BS in CS 13 years ago and I had to take a dedicated ethics course and also several of my other courses had ethics components. I don't know how common it is but certainly many colleges/universities have been doing it for quite some time.


I happen to have taken an ethics course but it wasn't at all part of my CS/AI education. It was just a humanities course in my undergrad that fulfilled some philosophy requirement. Did your CS program actually require ethics directly? If so that is rather interesting.


I've taken multiple ethics classes as part of my degree, and to be honest, if you don't already understand morality, you need the course. But I don't believe that those courses really have the desired impact; cheaters will cheat. It is more like these courses are created to cover the institution against the risk of appearing not to graduate ethical individuals, but they can't actually make people moral beyond catching those who break the honor code and expelling them.


Applied-ethics discussions and courses are not about making people ethical who might otherwise cheat; they're about making people aware of what the generally-acknowledged ethical concerns are in their field, so that they can reach better decisions.

(By way of example, the topic of bias in AI models would in fact be covered in any introduction to stats, including an extensive discussion of the well-known tradeoff between bias and variance.)
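
For reference, the decomposition being referenced, stated for squared-error loss under the usual assumption y = f(x) + ε with zero-mean noise of variance σ²:

    \mathbb{E}\big[(y - \hat{f}(x))^2\big]
      = \underbrace{\big(\mathbb{E}[\hat{f}(x)] - f(x)\big)^2}_{\text{bias}^2}
      + \underbrace{\operatorname{Var}\big(\hat{f}(x)\big)}_{\text{variance}}
      + \sigma^2

A high-capacity model drives the bias term down but inflates the variance term; regularization trades the other way.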


It should apply to every degree. AI isn't special in that regard.


A post of this quality shouldn't be disparaged because it left out a category that you find profoundly interesting.

This is a goldmine for people interested in the field of AI, what you do with your talents as an engineer is up to you - build weapons, build medical systems, build whatever you want.

Your moral compass isn’t going to be set by an extra class in your curriculum, it’s set by your life experience.


I'm really skeptical as to the value of these courses. I agree absolutely that ethics/morals etc. are essential things to learn and develop. I don't agree they can really be learnt in the classroom. Many of these things (e.g. ethics) are emotional intelligence skills that don't really respond to standard study. In engineering school I did all these courses and didn't get any benefit from them, other than some academic knowledge of Kantian vs Utilitarian ethics.


CS grads should be required to take a year of liberal arts at least.


Wrt ethics/morals: is any business gonna pay a SWE for that, or are they gonna pay a lawyer?


University isn't only about learning what is useful for your employer. It's so you get people who hopefully do not end up building software with racist biases built in, or hardware that refuses to work if you do not have light skin.


Are you talking about any software, or computer-vision specifically? What does a web dev need to learn about racism in software?


Possibly that people from underprivileged backgrounds are unlikely to have high-speed, reliable network services; that using image mapping makes a website unusable by the vision-impaired; and that making a website such that its layout changes after loading can prevent navigation by systems for people with physical disabilities?

While in most forms of engineering a lack of ethics can demonstrably lead to death (and AI is no exception when you look at proposed use cases in military and law enforcement), even web developers work on things that are for the public good.


Accessibility isn't an ethics topic. Planes falling out of the sky is bad ethically, but when engineers study aerodynamics, they are not studying ethics. No one is suggesting engineers must attend an ethics course to prevent planes falling out of the sky - that's what we train lawyers for - to sue the shit out of those responsible for letting that happen.

A web dev isn't going to think about "underprivileged backgrounds" etc - that's up to whoever decides who their demographic is. If you are building a website to sell Teslas, do you expect the dev to stamp their feet and demand a low-fi version on behalf of the underprivileged? Whoever is paying for development, it's their decision what to pay for.

"learning what is useful for your employer" is exactly what university is about, is exactly what you pay for, and is exactly what is expected.

It used to be the case that a "professional" was someone who cared about things at industry scope, not just their employer. SWEs are still considered cost centres, have no licence to revoke if they behave "unethically", and no bar, or association to make such a judgement anyway. Without a licence as a barrier to the profession, any uncooperative dev would find themselves outsourced for the next dev who doesn't give a toss.

Doctors and lawyers were always the benchmark for these kinds of things, and it's arguable that even those professions have become corrupted by corporate influence.


First off, ethics course requirements are incredibly common in engineering studies, and I've never heard anyone accuse lawyers of enforcing ethics. They enforce law and contracts, and are inherently reactive. The existence of lawyers has certainly not prevented a variety of engineering disasters that could have been avoided if the engineer had chosen to act differently.

Now, is it an ethical dilemma if you are asked to write firmware that disables a redundant safety sensor unless the customer pays additional fees? Is it ethical to use a dataset that you may know is not fully representative for a crime-estimation algorithm in order to achieve cost targets for your contract? Is it ethical to write software that changes device behavior when being examined by regulators? Is it ethical to design an interface for a public service that will be inaccessible to a portion of the population in order to meet contract deadlines?

> "learning what is useful for your employer" is exactly what university is about, is exactly what you pay for, and is exactly what is expected

This is a warped viewpoint that assumes you have no agency. I studied engineering because I wanted to learn how it worked and how to build things. Employers be damned, I have no issue working for myself if I cannot find an employer who I can come to terms with.

If you mean to argue that the standing of engineering as a profession has greatly fallen, then I agree with you, especially in software. However, even if there is no license to be revoked and no personal feeling of responsibility, the engineer who makes these design choices is not shielded from the legal repercussions of his actions, no matter how many times his supervisor told him to do it, and even assuming the supervisor is willing to take responsibility. No matter how much you may wish otherwise, engineers are still considered to hold the public trust and are expected to behave accordingly. And you should consider it, because when you comply with your employer to do something unethical and the shit hits the fan, they are going to put you straight in front of the firing squad to shield themselves from as many bullets as possible, just the same as they took the reward for as long as it flew under the radar.


How else do you enforce ethics other than reactively? Until we have Minority Report-style pre-crime, lawyers decide who is punished on ethical grounds.

> Now, is it an ethical dilemma if you are asked to write firmware that disables a redundant safety sensor unless the customer pays additional fees?

Not for the engineers. They get a spec; they don't write it. Is it unethical to stab someone on the street? Yes, but not for the knife manufacturer. Beyond that, it's for lawyers to decide the laws and how they are enforced, e.g. limits on who knives can be sold to, and punishments for non-compliance. It's likely the above scenario doesn't happen because it's illegal, not because SWEs refused to do it on any other grounds. If it isn't illegal, they'll just find someone else to do it.

> engineer who makes these design choices is not shielded from the legal repercussion of his actions no matter how many times his supervisor told him to even assuming he is willing to be responsible

There has to be a law to break. If you are assuming everything unethical is illegal, why bother with ethics - just give engineers training in the law. Or, don't bother, and instead run everything by a lawyer.


I love initiatives like this, but IMO if you're doing a self-study "degree" aimed at practical AI knowledge, I would:

* Drop the compilers and database courses.

* Add an intro to statistics course.

* Pick one domain, since you'll need deep, proven domain expertise in one area to get hired without a real degree. So doing both NLP and CV, for instance, is probably a bad choice if this is your endgame.

* If you choose CV, add a traditional image processing course. I don't recommend relying on whatever happens to be included in the DL-for-CV course. You might also want to add a good general-purpose DSP course.

* Substitute projects for exams. Exams are adversarial, so they don't make sense to give to yourself. You'll learn more doing a project than you would cramming for a fake exam.

* Try to finish the whole thing in 1.5 years, not 4. Spending 4 years on a practical curriculum that doesn't yield an actual degree is a huge waste of time. Try to get the theory out of the way in a year and a half so you can jump into real-life projects/research. With dedication, and without a rigid university schedule that forces slow progress, this is completely doable.


Compilers may not be as relevant for AI-related work, but databases are, imho. So much of real world AI work is setting up data pipelines. Knowing how transactions, indexes, and joins work is very useful.
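
A minimal sketch of the point, using Python's built-in sqlite3 (table and column names here are invented for illustration): an index to keep the join from degenerating into full scans, a transaction to keep label writes atomic, and a join to assemble training pairs.

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
        CREATE TABLE images (id INTEGER PRIMARY KEY, path TEXT);
        CREATE TABLE labels (image_id INTEGER, label TEXT);
        -- Without this index, the join below scans all of labels
        -- for every image as the tables grow.
        CREATE INDEX idx_labels_image ON labels(image_id);
    """)

    # The transaction makes the pair of inserts atomic: the pipeline
    # never sees an image row without its label row.
    with conn:
        conn.execute("INSERT INTO images VALUES (1, '/data/img_0001.png')")
        conn.execute("INSERT INTO labels VALUES (1, 'cat')")

    # The join that assembles (path, label) training pairs.
    pairs = conn.execute("""
        SELECT images.path, labels.label
        FROM images JOIN labels ON labels.image_id = images.id
    """).fetchall()
    print(pairs)  # [('/data/img_0001.png', 'cat')]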


True


This is missing a course on software testing, particularly one that has a focus on testing software-implemented models.

I'm almost done with a PhD in mechanical engineering. Some CS folks have lamented gaps in my programming-related education, but I actually did take an elective course on the reliability of science which included a lot on scientific software testing. I think such a class should be required for anyone working in theory or simulations.

In engineering, concerns about the testing and accuracy of a model are typically called "verification and validation". There unfortunately doesn't seem to be a standard curriculum for the subject yet but Wikipedia can give you an idea of what this covers and how it's different from general software testing: https://en.wikipedia.org/wiki/Verification_and_validation_of...

AI/ML seems to have its own culture around this, and I'm not sure it's actually rigorous. Seems to me that they could have their own version of the reproducibility crisis.

This course should be taken after a statistics course. Parts of it rely fairly heavily on statistics.
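
To make the idea concrete, here's the shape of a verification test (my own illustration, not from any particular course): check a numerical routine against a known analytic answer and confirm it converges at its theoretical order as the grid is refined.

    import math

    def trapezoid(f, a, b, n):
        # Composite trapezoidal rule on n intervals.
        h = (b - a) / n
        return h * (0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n)))

    exact = 2.0  # integral of sin(x) over [0, pi]
    errors = [abs(trapezoid(math.sin, 0.0, math.pi, n) - exact)
              for n in (16, 32, 64)]

    # The trapezoidal rule is second-order: halving h should cut the error
    # ~4x, so the observed order log2(e_coarse / e_fine) should be ~2.
    for e_coarse, e_fine in zip(errors, errors[1:]):
        order = math.log2(e_coarse / e_fine)
        assert 1.9 < order < 2.1, f"unexpected convergence order {order:.2f}"

A test like this catches silent discretization bugs that a "does it run" unit test never would.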


AI is having some reproducibility issues, especially reinforcement learning:

https://www.wired.com/story/artificial-intelligence-confront...

In other areas of machine learning, things have gotten a lot better. A decade ago people rarely released their code or trained models. I did this for an early feature learning paper in 2010, which led to it getting a lot of citations.

What I'm more worried about is the lack of science in machine learning papers. The code reproduces the result, but the reasons given in the paper for the efficacy are spurious. I like this recent paper that pokes at this issue in metric learning:

https://arxiv.org/abs/2003.08505


Thanks for the links.

I would consider the science issue you mention to be related. A lot of what was covered in the "verification, validation, and uncertainty quantification" course I took was basic science. The class wasn't only about software testing. Software testing was just a major tool covered in the class for the larger goal of making reliable scientific claims. What I wrote previously wasn't clear on this, so I clarified my previous comment on this point.


Baking software testing into an already tight 4-year AI/ML-focused undergrad seems misplaced. Maybe in a master's, but most master's programs have scientific methodology baked into them, so I'd still say it is redundant. A CS undergrad should definitely do some level of testing, though.


I think not making the 4th year primarily projects would help. In my mechanical engineering BS my 4th year was a combination of projects and electives (ignoring the senior design class). At the very least a class on software testing should be offered at the level of an elective.

The curriculum doesn't mention other required courses like general education, so perhaps the author thought electives weren't worth mentioning either.


This recommendation only specifies 9 credit hours (at most) a semester, with free study in the 4th year. I know that you have to build in room for your cross-discipline courses and electives, but it's not a very tight schedule. Which is good, because it's also missing Differential Equations, Set Theory, Fourier Analysis, and Stochastic Calculus, all of which are mathematical underpinnings of AI. I also agree that they need a class on software testing or validation or something along those lines.

A crash course in filter theory, so that people have some historical background on where these methods derive from, would be useful as well.
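
For anyone wondering where such a crash course would start: the scalar Kalman filter is a natural entry point. A toy sketch (all numbers invented for illustration) estimating a constant value from noisy measurements:

    import random

    true_value, meas_var = 10.0, 4.0  # hidden value, sensor noise variance
    x, p = 0.0, 100.0                 # initial estimate and its variance

    random.seed(0)
    for _ in range(20):
        z = random.gauss(true_value, meas_var ** 0.5)  # noisy measurement
        k = p / (p + meas_var)   # Kalman gain: how much to trust z over x
        x = x + k * (z - x)      # pull the estimate toward the measurement
        p = (1 - k) * p          # uncertainty shrinks with each update

    print(f"estimate {x:.2f} (true {true_value}), remaining variance {p:.3f}")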


I'm not seeing an actual statistics course in there. Also, GOFAI techniques seem to be only briefly mentioned in an "Introduction to AI" course, which just doesn't cut it in my view - you'd need something like an Operations Research class to really explore that stuff in more detail. Some of the more focused CS content could be cut to make room for this; OSes and DBs have been mentioned already, but maybe one could also remove the Compilers course.


Yeah, I'd probably remove compilers, and add classes on statistics, experimentation and actual data analysis.

I kinda find it hilarious that there is no mention of analysing data in this course, given that that is a core skill that all data professionals actually need. I mean, where do they think the models come from?

There's also little to nothing about ETL, which is pretty much all you do in most data sciencey jobs.

I actually think that this course is pretty revealing of the worldview of a lot of AI researchers, and may be why predictions of AI's imminent dominance across multiple fields appear to be not particularly accurate.


> "There's also little to nothing about ETL, which is pretty much all you do in most data sciencey jobs."

Maybe one could design a custom "Databases and ETL" class for this, to make it useful for more than just data-sciencey jobs. ETL is usually covered to some extent in stats classes though, AIUI.


I think a lot more maths is needed. "CS109 Probability for Computer Scientists" does not look like a sufficiently deep study in statistics.

Spending time studying operating systems and compilers (both things I've studied) at the cost of really immersing yourself in the maths of probability seems to me to be a misallocation of a scarce resource (time).

E.g. a course in Bayesian statistics, and also one in traditional statistical inference, would be useful (for instance, this gives a good foundation for understanding the EM algorithm).
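
To illustrate (a bare-bones sketch, with unit variances held fixed for brevity): EM for a two-component 1-D Gaussian mixture is just Bayes' rule (E-step) plus weighted maximum likelihood (M-step), which is exactly the material such courses supply.

    import math
    import random

    random.seed(0)
    data = ([random.gauss(-2, 1) for _ in range(200)]
            + [random.gauss(3, 1) for _ in range(200)])

    def normal_pdf(x, mu):  # unit-variance Gaussian density
        return math.exp(-0.5 * (x - mu) ** 2) / math.sqrt(2 * math.pi)

    mu, w = [-1.0, 1.0], [0.5, 0.5]  # initial means and mixing weights

    for _ in range(50):
        # E-step: posterior responsibility of each component for each point.
        resp = []
        for x in data:
            p = [w[k] * normal_pdf(x, mu[k]) for k in (0, 1)]
            s = p[0] + p[1]
            resp.append((p[0] / s, p[1] / s))
        # M-step: responsibility-weighted re-estimates of weights and means.
        for k in (0, 1):
            n_k = sum(r[k] for r in resp)
            w[k] = n_k / len(data)
            mu[k] = sum(r[k] * x for r, x in zip(resp, data)) / n_k

    print(f"recovered means ~ {mu[0]:.2f}, {mu[1]:.2f} (true -2, 3)")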


I don't understand why

> Convolutional Neural Networks for Computer Vision

is taken without traditional CV beforehand, while for NLP it says "include traditional NLP". I'd say it is a big no-no to jump into CNNs directly without some basic CV; even though CNNs outperform most traditional algorithms, you will miss big fundamentals.
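
One concrete example of the fundamentals meant here, sketched in plain Python: a Sobel edge detector via an explicit sliding-window filter (implemented as cross-correlation, the same operation CNN layers compute). Doing this by hand once makes it obvious what the first layers of a CNN are learning.

    SOBEL_X = [[-1, 0, 1],
               [-2, 0, 2],
               [-1, 0, 1]]

    def filter2d(img, kernel):
        # Slide the kernel over the image, leaving a 1-pixel border untouched.
        h, w, k = len(img), len(img[0]), len(kernel)
        pad = k // 2
        out = [[0.0] * w for _ in range(h)]
        for y in range(pad, h - pad):
            for x in range(pad, w - pad):
                out[y][x] = sum(kernel[j][i] * img[y + j - pad][x + i - pad]
                                for j in range(k) for i in range(k))
        return out

    # A 6x6 "image" with a vertical dark-to-bright edge down the middle.
    img = [[0, 0, 0, 9, 9, 9] for _ in range(6)]
    gx = filter2d(img, SOBEL_X)
    print(gx[3])  # strong response at the edge columns, zero elsewhere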

Also, most first/second-year courses seem too CS-focused, e.g. OS and Databases. I would remove those two and add Differential Equations & Discrete Mathematics (and numerical methods, maybe).

At best, the overemphasis on general CS seems to create "CS with some ML experience" rather than a solid foundation in the principles behind it.

This will give you shoddy CS knowledge, and a more fragile AI/ML understanding than someone with an AI/ML background.

I would call this program "Data Science" but not AI.

Edit: removed "looks good" prefix after a second take at it.


The lack of statistical content disqualifies it for "Data Science" as well


This is a well written article and has really good courses and topics in it.

Cynically though, I wonder how many of those who liked it and filed it under "I should get to this at some point" will actually use it at all. In AI it's become a cliché how many introductory blog posts, intro YouTube videos, and "how do I start" Reddit and Quora questions there are. The resources and buzz are very high for the first steps. I guess it's similar to how "how do I make a game" or "how can I learn to hack" are popular.

As I said, this one is very solid and has very actionable pointers to great courses. But I find that only a small minority of people are so obsessed that they can pull off this huge project on their own. It's a multi-year undertaking, and even though the resources are freely available, a normal Homo sapiens just doesn't work like this. For all but the wild outliers (who would find the info anyway, with no obstacle able to stop them), the context of a university program is really necessary. It gives you time, structure, and social motivation: discussing with people in the same boat, helping them out, getting help from them in person, being forced to chew through the boring bits, not settling for your own self-assessment, etc.

Otherwise I think this also just gets thrown onto the pile of bookmarks that we all build to make ourselves feel good about a future day when we are somehow magically motivated and sharp enough to tackle and learn all those things we bookmarked.


> "n AI it's been a cliche now how many introductory blog posts and into YouTube videos and "how do I start""

One of the frustrating things is that it's really hard to find great advanced content, and when you reach out to HN readers, they are like 'read really advanced PhD papers'. That's not what I mean at all >.<


There are textbooks, but people (myself included sometimes) want something flashier something more palatable. But other than conference talks and papers, textbooks and lecture slides, there just isn't much out there.

It's a consequence of how the sausage is made. Textbooks take years to write so only established topics can be included in them. Researchers are under time pressure and chase performance criteria. Publishing at a good conference pushes you along your PhD path, but writing a blog post for a niche advanced target audience of maybe 100 readers is not always worth it. Especially when you have other duties beyond research like teaching courses. Industry blog posts are good, but also cannot go very deep as the audience would be lost.

Also the nature of advanced stuff is that there is less of a clear established path forward. There are tons of small specialized communities that often don't know much about each other, they are in different departments etc.

Your best bet in becoming more advanced is to use the above mentioned resources and work with people who are more advanced than you, either in academia or industry. There is no law of nature that everything must be achievable by browsing the free internet without leaving the house.

An important lesson, once you get to a certain point in your studies, is that there is no set Platonic chapter-by-chapter structure to knowledge and science. A common failure mode is to think that learning is about leveling up in some achievement tree like in Duolingo, and being puzzled about where the next chapter is. You have to seek it out: tackle a practical problem, either from your own idea by doing a side project, or by joining a group as an assistant or a company as a junior engineer/data scientist.

A lot of it is "dark knowledge", hidden inside organizations, people learning from one another. There are best practices that "everyone" just knows to do by discussing by the coffee machine, it's not in any book or paper. And papers are written in an obtuse language to not give away too much (because that will be in the next paper).

The interests of an advanced practitioner reader are markedly different from the paper-writing researcher's incentives, so papers can be hard to read and hard to decipher for takeaways. The goal of a paper is to frame a small incremental tweak in the context of the larger literature in such a way that the reviewers are sufficiently impressed to click accept. Papers are obsessed with how they fit into the academic community's pursuit. They aren't tutorials or howtos or guides. Fortunately, people are now releasing more and more code. The code is often more enlightening than the aggrandized math of the paper (which often boils down to a few lines of code).


More of a meta question: but why should all undergrad degrees be 4 years? Why should an undergrad in computer science and one in AI be the same length of time?

Was it created because a school wants to charge the same amount regardless of major? Or because 4 years is an acceptable proportion of a human life to spend studying to obtain a white collar job half a century ago? Or some other reason?

(Related, why do people want the 4 year degree as a pre-requisite for most jobs? Why do some jobs -- law and medicine -- require 3 and 4 year degrees on top of it? Why is the CFA happy with 3 years of tests on your own time?)


It's probably a combination of all your questions: a simpler fee structure that takes a standardized part of young adulthood and provides a standardized amount of expertise, so that "an undergraduate degree" denotes a certain amount of breadth and depth. It's about what a typical college student can learn in four years.

Some fields like law and medicine require more time to become practically useful: 7ish years for law, which we can break down into units of 4 + 3, an undergraduate degree and a law doctorate (JD). Or medicine, which takes like 15 years and is split into 4 + 4 + 3-5 + 1-4: an undergrad degree, a medical doctorate (MD), a residency, and a fellowship.

Maybe splitting it into these points helps standardize those and provide convenient points where paths split.


Physicist's perspective. Logically an undergraduate degree is 3 years: the first year is catch up and baseline, everyone gets to the same standard. The second year is actually learning the bulk of the core material. The third year is specialisation and a dissertation project. This also allows for people to fail and resit years as major milestones.

Many technical courses overlap. Mathematicians in particular are generally allowed to do anything. I think in the US it's even greater with the minor/major system, so courses like calculus/linear algebra are enormous because everyone from engineering to chemistry might sit in.

The majority of undergraduate degrees in the UK are 3 years, 4 if you do a Masters on top. You could probably do the material faster; probably 2 years? But... Physics had one of the highest course loads of all the degrees, 25 hours a week in second year. That's just lecture time, you're expected to do that time again outside. We had almost no free time in the first year due to the amount of compulsory coursework (the crap stuff which requires an expensive 99th edition textbook with a code in it). In later years there was less compulsory homework so less pressure, but then you struggle if you skip it.

Exams.. well they had to schedule four years' worth of exams with fairly little overlap, plus time in the summer term for extra modules that wouldn't fit in the second semester. So that takes a couple of months. I guess if you had the capacity you could also optimise that into a big two week block for all students, but that would be challenging.

We had 10 week terms/semesters (Oxbridge have 8 officially, but really it's 10) and the rest is effectively vacation. In my day (hah), internships weren't really a big thing. I did a research placement at my university and a few people got industry jobs, but not many. Most of us just enjoyed a month of vacation over Christmas, we revised over Easter and we took the summer off. It wasn't like CS today when people try and get a different FAANG job every summer.

The dissertation was budgeted at, I think, 2 days a week for two semesters (though you were expected to work more than that).

If you cut down holidays then, you could probably squeeze the program into 2 years. That would be an unpleasant experience.

I believe in some countries like Austria, you can get a degree by sitting the requisite courses on your own schedule.


I think it shouldn't. It's simply used as a heuristic or a comparison against actual degrees.

If you know what you want to build and can optimize a curriculum towards it, then go for it. For people who just want "fundamental knowledge" in a field, knowing that they spent about 5% of their life on it is probably a way of feeling that they have spent enough time on it.


Mainly cultural reasons, I'd say. In some countries you jump right into a professional program like medicine straight after high school, but then it takes 6 years or so. The total duration is still basically the same, due to all the additional specialized knowledge needed for professional degrees.


This seems optimized for AI in the hyped sense (as shorthand for applied machine learning and data science) rather than actually preparing a student to contribute to "real" AI (see work of Josh Tenenbaum or Brenden Lake). For an undergrad curriculum for the latter, I'd expect to see more on understanding intelligence in extant biological systems, eg courses in cognitive science, neuroscience, child cognitive development, and more background on animal cognition. I would also expect to see more robotics and at least some treatment of reinforcement learning.


I would also move up the second year "intro to AI" course to the first year, with an emphasis on history. You shouldn't be hearing about AI Winter for the first time after already spending a year on the subject.


This is a course plan for computer science. I don't see why Artificial Intelligence should have its own "degree". However, there is a trend towards these niche degrees at universities too.


We have never considered splitting medical school into subdomains at undergraduate level. Why are we trying to do this now for Computer Science?


Because most people don't want to be computer scientists, they just want to land an extremely high-paying job doing <trendy overhyped computer topic of 20XX>.


Yes. Many surgeons also want to get all the sweet prestige and feel like demigods among the mortals and don't all necessarily get an intrinsic excitement from sawing those bones.

It's fine to want money. Who are we to judge someone who sets the goal of getting a good job, looks into upcoming fields that pay well, and then applies to a university to spend years studying it? It's perfectly fine and rational. In free-tuition countries this may be someone from a lower economic class, crawling out of their situation.

It's legit and okay not to be a wunderkind from age 2 who built computers with his dad from the get go in order to be eligible to study CS or AI. Of course once they are in, they have to go along with the program.

But I get annoyed with this gatekeeping attitude that only us nerds are worthy to learn AI. If you are smarter and put more effort in, you don't have to become bitter; there is nothing to fear. You will be able to demonstrate your expertise and will still get good jobs, even if there is a bigger supply of those who are only in it for the money.


It's not a criticism, really (there's nothing wrong with wanting a job) but parent comment was asking why an academic field is being balkanized into a variety of specialized trade school topics, and that's why: to try and produce graduates who are immediately employable in the technology du jour.


Yes, on the other hand it's often the case that those chasing the jobs in the hyped sector can get caught in the phony parts. They can be easier to sway and ultimately be milked more than those who are on a more stable footing. It often seems to be the case that such specialized new programs get a cohort of students with this attitude: they want results and have a "teach me" mindset, with poor average grades across the studentship. This in turn makes the uni reduce the difficulty and depth of the program compared to the traditional CS program at the same university.


Dentists, nurses, physiotherapists and physicians are all degree level medical specialties. Engineering is split. What’s the argument against doing it for medicine other than tradition?


> Engineering is split. What’s the argument against doing it for medicine other than tradition?

One might argue medicine is a split-off piece from natural sciences.

A counter-argument might be that while the natural sciences are almost always split, where they aren't, such as at Cambridge, medicine is still of course separate.

I think the real reason is probably just that there's more value to most medicos in a whole-body understanding than there is to most engineers in a multi-disciplinary understanding.

I'd quite like to need to routinely design electronic circuits, CAD/CAM packaging for them with certain mechanical constraints, and develop software to run on them in my work, but I don't; that'd need to be a very small company working on a physical product for that not to be at least two people's jobs.


I feel that mechanical engineers should have a bit of understanding about what is going on behind the scenes when they click on things in a CAD package.

From talking to recent students and current professors, I'm not sure they are learning this as part of a degree course.


By 'behind the scenes' do you mean the physical objects that they're modelling, or how the software works?


I mean the kind of data structures that the software is operating on, in particular the ones that can end up in an exported file.


Yes, maybe my approach and perspective are skewed by traditional thinking. Unfortunately, I'm still failing to see the need for an AI split in CS. An AI student might not need to learn the basics of programming languages, operating systems, OOP, DB management, and many other core CS topics, but then, to be honest, what else remains? If we were to simply convert elective AI courses to "must", then are we going to offer classic CS courses as "elective"?


The comment is more about splitting it at the undergraduate level. I think most of the medical specialties you listed will have basically the same undergraduate experience, though I could be mistaken.


s/computer science/Mathematics/
s/Artificial Intelligence/Computer Science/

New fields start somewhere.


Except that ML and AI aren't new fields at all - they are over 60 years old and predate CS education by decades.

CNN, LSTM, and RNN are just special techniques in a much broader field that has been around for a long time.

Thinking ML/AI is new is exactly equivalent to thinking that electric cars are new...


Linear regression, correlation and general statistical minimisation approaches are almost a century old.

The only real difference is that computer scientists are cool and have hip new terms (and money), while statisticians are boring and uncool.


CS was around many years before becoming a major.


There are a lot of weird choices here. The lack of math/stats is glaring, and the inclusion of compilers seems to come at the cost of more relevant material for someone focusing on AI (speaking as someone who studied CS with a specific focus on PL/compiler-type things). You don't just throw project-based work in at the end (it should run through the entire degree), and there are two systems/OS courses included but no networking?

Generally I think having an AI degree is fine, but at the end of the day it's a CS degree plus a concentration no matter how you slice it. This isn't like CS splitting from math, etc.


I find it a pity that while "Compilers" courses are always about implementing compilers, "Databases" courses are 99% about using database systems.


I never really thought of that, but it's pretty true. I guess that using a compiler is generally a lot simpler than using a database. When using a compiler, the majority of the time you're doing something really simple like straight compilation, or you have a build system take care of the entire thing, whereas a database requires design considerations. That would be my guess at least.

But I do agree that a database implementation course would be fantastic. It's one of the most important categories of software today for sure, up there with operating systems and compilers. My Uni was going to offer a course this Fall, but COVID has led to it being delayed. Here's to hoping it's offered in the Spring.


I would argue that the "using a compiler"-class would be the programming class!


"Operating Systems" courses are mostly about OS dev as well.


God the CS field is really getting muddied by AI hype. Please for the love of god start with the fundamentals.


4 year degrees in AI already exist.

Here is Edinburgh: https://www.ed.ac.uk/studying/undergraduate/2020/degrees/ind...

Here is CMU: https://www.cs.cmu.edu/bs-in-artificial-intelligence

I think the above degrees are more balanced than OP's proposal. More emphasis on foundations, and courses on ethics. Operating systems and compilers are useful if you go into AI engineering but are a diversion from core AI topics.


Edinburgh has always had a reputation for being a good HPC/computational institute, so no surprises there.

It's also nice to see that you are forced to take options from pretty much any other subject, which is quite rare in UK degrees. We had very tight restrictions over which external courses we were allowed to take, presumably due to tight scheduling and a huge number of compulsory courses.


But are they free? I think that's OP's point.


I don't think that is the case. I just clicked a random course in their curriculum (Convex Optimization) and it links to a Stanford course that requires a login to access the videos.


Yeah, that's not what my 4 year AI curriculum looked like. Mine focused more on Expert Systems. Also more logic, more software engineering, more psychology and philosophy. But much less algebra and calculus, less computer systems and parallel computing (though I took them as elective), no compilers (I don't really see the need either, though it's an interesting subject), and clearly not enough machine learning.

I also would have liked to see more robotics. Our neighbouring university had an AI curriculum that focused more on robotics.

If I could design a curriculum like this, I'd keep the focus on software engineering and logic that we had (maybe a bit less logic than we had; they were overdoing it), keep algorithms, keep some philosophy and psychology, but make them more focused on our field and include things about social impact and ethics. Definitely more linear algebra and calculus, some statistics, machine learning, and get some basics for vision processing. After the first two years, you'd get to choose between more focus on vision and robotics, or more focus on logic, statistics and expert systems. Machine learning should be included in both.


I actually was on a committee to do this for a university. Ultimately, some believed it would take resources away from computer science and compete with the program, so it was a year of work without anything to show for it.

I'm still a proponent of undergraduate AI degrees, but this proposal doesn't suffice. Deployment, testing, dataset collecting, statistics, more math, better organization of electives (computer vision, robotics, NLP as applied electives), bias mitigation, ethics, and more need a role. The state and university guidelines also provide a lot of constraints. I do think that a research or applied project is essential, which is captured in this proposal.

I'd definitely remove operating systems and compilers. I took both courses at the graduate level and have been working in basic and applied AI research for 15+ years, and those courses haven't helped me with that.


An AI degree that doesn't teach anything about learning about the real world from data (the stats that scientists get) seems really weird, even if that does seem to be the state of things. AI is about learning and decisionmaking, and the learning side is currently pretty heavy on the pattern recognition side, but why bake that into the education? You can't teach someone to automate learning without teaching them to learn the old fashioned way. There's a reason basic statistical inference and experimental design is such a foundation for every scientific field, but it's always missing in these AI progressions. It's sure as hell present in the jobs people going through these degrees will probably end up in.


So, people are just supposed to start their 4-year undergraduate degree in Linear Algebra? I know HNers are rare air, so to speak, but what about Calc 1-3 and Diff Eq?

This is like a 4 year plan once your first two years of university are under your belt.

So a combined CS / MS in AI?


Linear Algebra is quite self-contained. You'd need Calculus in there to really make sense of the probability and stats, though. Not sure about the vector calculus part and diffeq's, maybe that can be cut and the whole thing might fit in 4 years.


I just can't see how a single self-contained Linear Algebra course could in any way prepare one for the actual data science behind AI and ML. But your point is well taken otherwise.

Edit: Reviewing Bretscher (Linear Algebra with Applications, 5th), it appears you are correct; it's not until you study the applications that you need Calculus. Most Unis I'm familiar with, however, will require multivariable (Calc 3) at least as a pre-req for Linear Algebra.


A self-contained linear algebra course would be a lot of heavy theory for freshman. Doing multivariate calculus first just makes it easier.


Reading the comments in this thread, I am surprised to see that many think it lacks math. I am majoring in applied math (all my professors are French-school mathematicians, at a non-American university), but I deem my courses excessively theoretical. It doesn't help that the degree is 6 years long.

In the meantime I've been solving coding problems but still feel like I lack something in order to make working AI programs. Maybe someone can tell me if the non-math courses listed there are more than enough for a person like me to get into AI.

If you're curious about my background, feel free to ask.


Lots of suggestions about math, ethics, SWE; I’d suggest some biology. Especially biology of sensory systems. Both for some understanding of the systems that already solve these AI problems really well, and for better understanding of human interface requirements: what do we really hear and see in these AI-generated stimuli.


Why no 3 month bootcamp?


This also seems to be missing any coursework outside of CS/Math.

People certainly disagree about the overall value of general education requirements, but a writing course, even a technical one, would fit nicely here. No point in doing amazing work if you can’t tell people about it, after all.


The proposed curriculum looks very similar to a computer science curriculum w/ an AI minor/specialization. I'm curious why a student would opt for a specialized AI curriculum rather than a broader CS one? Wouldn't the CS provide more options down the road?


For comparison, here is the AI bachelor degree from JKU Linz: https://studienhandbuch.jku.at/curr/714


It's good that it includes an AI and Society course.


Professor and curriculum engineer here who has created an undergraduate degree in CS/IT. Proof: (https://www.rtc.edu/net-arch)

First off, this is a great starting point! There is a goal for every year, the courses clearly build upon each other, and there is an end-goal. I am a big advocate for project-based learning, and I'm seeing more employers seriously consider portfolios like GitHub and blogs. No employer cares that you got a 103% on your algorithms exam, and few vaguely care about your GPA. They do care if you can hold a conversation about a topic. They do care if you applied the knowledge, like building something that uses a tree, even if you didn't explicitly write all the data structure code yourself.

This curriculum is designed for the elite; think a student who would do well at Stanford. This is probably an obvious comment, as the majority of courses come from Stanford, but it's important to note that the breakneck speed of this curriculum targets the top 1-5% of students. If our hypothetical freshman is a high school graduate, they need to have done well on the BC Calculus, AP Statistics, and AP Computer Science exams at minimum. It also assumes that the student has enough computer experience to pick up advanced skills rapidly. You could argue that "Introduction to Computer Systems" and "Programming Fundamentals" are entry level, but reading carefully, they are closer to the 2nd or 3rd CS course a student would take in a normal sequence. Week 2 of Programming Fundamentals covers stacks and queues, and week 1 of Intro to Computer Systems introduces Unix, the CLI, and GitHub all at once. These are very advanced topics for students who come in with zero knowledge. You will need either rigorous admissions requirements, or to prepare for a high first-year dropout rate.

We tried a 4th year very similar to what was described in the article. Students could pick a final project done in partnership with industry, or report on their internship/career if they were already in one. Students responded with "What do I need to do to get full credit?", "Can you give me step-by-step instructions that I can follow?", and "What will be on the exam?" (there are no exams in the program after year 2). After 3 years of being explicitly told what they must do to succeed, students have correctly learned that going above and beyond in a class rarely yields extrinsic benefit within the scope of a class. To put it another way, students are trained to work efficiently in a classroom environment: "Exactly as hard as I need to get the exact grade I want, and not 1% more." It's not necessarily a bad mindset, but we need to train students out of habits that are bad if applied outside of the classroom. In the case of my program, we chose to set a bar in the 4th year but didn't tell students how high to jump. This translates to removing a lot of clarity from assignment rubrics, and providing direction but not answers. A lot of students flailed, but none failed (yet!). Graduates report a smoother transition into industry.

We also introduced GitHub in their third year as a portfolio tool; few students have enough knowledge to understand and use it before then. At the end of every quarter, students build a final project in groups and apply all of the skills that go into a good repository (collaborators, pull requests, readmes, CI, etc.). While not all employers are looking at the actual projects, all students who have graduated thus far have said they used the experience of talking about a project in interviews.

On the internship/project-class/research front, unless you are using your reputation to open those doors for students (as I had to), a student's best chance of making an arbitrary contribution is through GitHub issues and PRs. Your college's reputation is also a major factor here; in the case of my specific college, Amazon Web Services won't work with us because of a teaching agreement that went sour before I was hired.

Some critiques from others I'd like to comment on, in lieu of the author: "Why isn't there core class X?" - Most students can't handle more than 2 core classes per quarter. That means 24 core classes over a 4-year period. Choose wisely. I think the author chose well, but no plan survives its first encounter. My curriculum certainly didn't, and 3 years in we've thrown out almost all of the curriculum we initially wrote.

"No ethics, soft skills?" - This is a core class curriculum. Electives and additional classes for "rounding out" a student occur during at a later phase of curriculum design, usually when it's time to meet accreditation standards. From there you will not like the second part of this answer. In my 3 year study on building an IT bachelor's (https://www.nsf.gov/awardsearch/showAward?AWD_ID=1601140), we found ethics training to work AGAINST a student's hiring prospects (cue pitchforks). Our Business and Industry Leadership Team (representatives from technology companies) rated "teaching ethics" at an average of 2.1/5 (A 2 means slight disagree that this should be taught). They marked "ethical knowledge application" at 1.4/5 (A 1 means strongly disagree that this is important). The BILT team commented that they were unlikely to listen to a recent graduate's ethical concerns on a project, they would regard a graduate raising the concerns as a 'nuisance', and that they would fire the graduate if their ethical concerns significantly impacted their job performance (cue more pitchforks). Finally, consider the fresh graduate's perspective. The typical first-job-out-of-college mindset is "I have a ton of debt. I'll take the first job with the largest salary." I see this advice regularly on HN.


I think CMU's 15-213 is better than Stanford's CS107.


I’d like to see a similar one for computer science


Have you seen this one? https://teachyourselfcs.com/


If you are serious, pick an institution and view any of their ABET accredited programs. Look for Computer Science or Computer Engineering and then go to the University to view the curriculum: https://www.abet.org/


I would not rely on ABET for this.

Stanford, CMU, Yale, Cornell, Columbia, and Princeton aren't ABET accredited. Devry Tech is, but I still think you should give Stanford and CMU a second look.


Machine learning is not AI, but a part of it.



