I think it's the same as stealing a bike or a car parked on the street. I don't know the subtleties, but I don't think you can presume something is abandoned merely because it was left on the street?
It's really jarring to see this wave of nostalgia for "the good old days" appear since ~2025. Suddenly these rose-tinted glasses have dropped and everything before LLM usage became ubiquitous was a beautiful romantic era of human collaboration, understanding and craftsmanship.
I still acutely remember the gatekeeping and hostility of peak stack overflow, and the inanity of churning out jira tickets as fast as possible for misguided product initiatives. It's just wild yo
Probably people complaining about AI today were fine with Stack Overflow before and didn't have anything to complain about back then.
I also had a better experience with Stack Overflow than with AI. The AI was unable to tell me that I couldn't assign a new value to my std::optional in my specific case, and kept hallucinating copy constructor rules. A Stack Overflow question matching my problem cleared that up for me.
Or, like me, the kind of questions I'm interested in are answered at a far worse rate by LLMs than by Stack Overflow, like, ever.
I have and have had problems with Stack Overflow. But LLMs are nowhere near that level, and unfortunately, as we can see, Stack Overflow is basically dead, which is very problematic with newish things like Android Compose. Not once could, for example, Opus give the best option on the first try, even for something simple, like wanting a zero WindowInsets object… it gives an answer for sure, but completely ignores the simplest one. And that happens all the time. I'm not saying Stack Overflow was good at this, but it was better for sure.
I don't think I've ever asked a question on Stack Overflow, but I've consulted it several times. Even when I haven't found my exact use case, there's always something similar or related that gives me the right direction for research (a book or an article reference, the name of a concept to use as a keyword, …)
It’s kinda the same feeling when browsing the faq of a project. It gives you a more complete sense of the domain boundaries.
I still prefer to refer to a book or SO instead of asking the AI. Coherency and purposefulness matter more to me than a direct answer that may be wrong.
> A Stack Overflow question matching my problem cleared that up for me.
Perhaps if there was no question already available you'd have had a different experience. Getting clearly written and specific questions promptly closed as duplicates of related, yet distinct issues, was part of the fun.
I find that AI hallucinates in the same way that someone can be very confident and wrong at the same time, with the difference that the feedback is almost instant and there are no difficult personalities to deal with.
I've found recent Claude Code to be surprisingly good at dispelling false assumptions and incorrect framing. I say this as someone who experimented with it last summer and found it to be kinda stupid; since December last year it's turned a corner - it's not the sycophantic nonsense it used to be.
I think most people found Stack Overflow to be pretty easy and useful, since it's a pretty small minority of people that ever asked questions on it; many people didn't interact at all with the more annoying parts.
> It feels so much better to ask humans a question then the machine
I could not disagree more! With pesky humans, you have all sorts of things to worry about:
- is my question stupid? will they think badly of me if i ask it?
- what if they dont know the answer? did i just inadvertently make them look stupid?
- the question i have is related to their current work... i hope they dont see me as a threat!
and on and on. asking questions in such a manner as to elicit the answer, without negative externalities, is quite the art form as i'm sure many stack overflow users will tell you. many word orderings trigger a 'latent space' which activates the "umm, why are you even doing this?" with the implication being "you really are stupid!", totally useless to the question-asker and a much more frustrating time-waster than even the most moralizing LLM.
with LLMs, you don't have to play these 'token games'. you throw your query at it, and irrespective of the word order, word choice, or the nature of the question - it gives you a perfectly neutral response, or at worst politely refuses to answer.
That’s a level of paranoia that I can’t really understand. I just do my research, then for information I can’t access, don’t know how to access, or can’t comprehend, I reach out. People have the right to not want to share information. If it’s in a work setting and the situation is blocking, I notify my supervisor.
> many word orderings trigger a 'latent space' which activates the "umm, why are you even doing this?" with the implication begin "you really are stupid!"
You may have heard of the XY problem, where people ask a question about Y only because they have an incorrect answer to X. A question has a goal (unless rhetorical), and to the person being asked, it may be confusing. You may have a valid reason to go against common sense, but if the other person is not your tutor or a fellow researcher, they may not be willing to accommodate you and spend their time on a goal they have no context about.
Remember the car wash question for LLMs? Some phrasings have the pattern of a trick question, and that's another thing people watch out for.
I strongly believe that if you set aside genre preferences, the solid-body electric guitar coupled to a tube amplifier is objectively the greatest electronic instrument ever created.
All other electronic instruments, with the one exception of the Theremin, have a fundamental problem with human expression. There is an unsolvable disconnect between the performer's actions and their audience.
With an electric guitar you get the physicality and dynamism of an acoustic instrument with the complex timbres and extended technique possibilities of an electric/electronic instrument.
There are complex and musically significant feedback loops occurring across many dimensions that lead to extremely complex transformations of timbre via both traditional music theoretical techniques and the physics of a tube amplifier combined with an inductive load (the guitar pickup).
It's really crazy how much more dynamic and complex this can be than even a highly sophisticated modular synthesizer or whatever. Even the way you overload the power supply in a tube amplifier can be manipulated on the fly to enhance and transform timbre.
Then on top of all that it is so incredibly physical that a performer like Jimi Hendrix can manipulate these systems and have the audience intuitively understand what he is doing. Never in a million years would THAT be possible with any other electronic instrument.
The reverse example of this is musicians who play techno with analog instruments, like Pipe Guy, Basstong, and Meute[0][1][2].
There are always some people who get extremely defensive whenever I say that techno didn't click for me until I heard this kind of "techlow" music. Specifically about the part where I think that the reason is also a human expression problem, because of limitations imposed by the electronic media used.
EDIT: having said that, I don't think I would agree with your premise, because it is colored by a subtle form of survivorship bias. None of us remember what it's like to not know electric guitars or what they sound like, so claiming "the audience intuitively understands what Jimi Hendrix is doing" is like saying everyone "intuitively understands" their native language. On top of that, there's nothing about the workings of an electric guitar that wouldn't in principle work for something like an electric violin or whatever.
Legends Never Die - Leagueoflegends + Ethnic Instruments by Belle Sisoski [1]. And no, I've never played LoL, I probably never will, and I haven't seen that series based on it (Arcana or something?) either.
Also, I haven't checked what Juno Reactor do these days, but their old work is fantastic. My fav show of theirs is Juno Reactor – Shango Tour 2001 Tokyo [2].
For electric violin, I love Ed Alleyne-Johnson [3]. Never seen him live (I'm not from the UK) but I own a couple of his earlier works. It reminds me of the time when my dad was in the final years of his life, and when he finally passed away. Makes me cry every time.
You might also enjoy Beardyman, if you haven't run across him yet. Does techno and other genres with nothing but his own voice and a shedload of ipads: https://www.youtube.com/watch?v=DYVUlx7BhhI
Nathan Flutebox Lee and Beardyman @ Google, London [1] is one of my favs. At the time it was available on 'Google Video', before Google acquired YouTube, so I don't have a link to the original post. SPOILER: especially the bit with the Godfather theme when he says "Google" - it's just epic.
> There are always some people who get extremely defensive whenever I say that techno didn't click for me until I heard this kind of "techlow" music. Specifically about the part where I think that the reason is also a human expression problem, because of limitations imposed by the electronic media used.
I guess the part people don't like hearing is the implication techno is somehow not expressive. I'm not sure that it lacks expressiveness, but it is certainly more "controlled" than traditional music. When I first heard techno as a teenager in the 90s, my mind was blown. I remember exactly where I was the first time I heard Underworld [1], Photek [2], and Autechre [3]. I think I was attracted to these sounds _because_ they were so different. I think it's hard for electronic music fans like myself to accept the idea that it isn't expressive _because_ it is so different. Isn't it just a different kind of expression?
Still, people like what they like. I'm glad you found a version of dance music that works for you. I've long since moved on being judgmental about people's musical tastes. I think it's just wonderful that music exists at all!
> I guess the part people don't like hearing is the implication techno is somehow not expressive.
I think of it more like a painter's palette: every instrument and tool involved in creating music has a different set of colors to choose from, and can also filter some "colors" out if we think of things like audio processing filters.
The tools and techniques typically used to produce techno filter out "colors" that feel essential to me to connect with a song, and yeah, that "controlled" aspect of it is probably a large part of that. That doesn't mean it's not expressive, it's just expressive in a way that I struggle to connect with.
EDIT: funnily enough, I actually have protanomaly, so my choice of analogy is slightly ironic. Some visual art and design out there objectively looks terrible from my subjective experience, since the colors look completely off. But that doesn't mean I'm saying the art is objectively bad.
The whole thing about people being defensive is interesting. I love techno, but anyone who has learned other styles of music recognizes the repetitiveness and quirks of a lot of techno and some other electronic genres.
They do a great job with changing their timbre and tones but often ignore a bunch of other factors that make music interesting. Whether that is the rarity of time signatures other than 4/4, the way certain rhythms are locked into certain genres, the choices of keys used, the limited or missing chords, etc. At some point you start hearing two electronic songs that sound totally different at a superficial level and you realize they're incredibly derivative of each other.
Nice addition! First time I heard of them and I'm liking what I'm hearing so far.
And just to clarify: I don't dislike electronic instruments. I just think that on some subconscious level the human brain can detect other humans playing a live instrument. Like there's something "embodied" in the sound that is likely missing from a pure electronic instrument. And I needed that element to "unlock" access to techno.
Yep, there's a reason we have the industry term "humanization" in sound design, composition and arrangement.
Tons of work has been done on various modes of humanization by trying to parameterize and modulate these aspects over time. Timing accuracy, velocity variance, chance, etc.
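For what it's worth, the core idea is simple enough to sketch in a few lines. This is a toy illustration of those parameters (timing, velocity, chance), not any particular DAW's algorithm, and the function and parameter names here are made up:

```python
import random

def humanize(notes, timing_jitter=0.01, velocity_jitter=8, drop_chance=0.0, seed=None):
    """Loosen up a rigid sequence of (start_time_sec, midi_velocity) note events.

    timing_jitter: max random offset in seconds applied to each note's start.
    velocity_jitter: max random change applied to each note's velocity (1-127).
    drop_chance: probability of skipping a note entirely ("chance" modulation).
    """
    rng = random.Random(seed)
    out = []
    for start, velocity in notes:
        if rng.random() < drop_chance:
            continue  # occasionally drop a note, as some humanizers do
        start += rng.uniform(-timing_jitter, timing_jitter)
        velocity += rng.randint(-velocity_jitter, velocity_jitter)
        # clamp back into legal ranges after jittering
        out.append((max(0.0, start), max(1, min(127, velocity))))
    return out

# A rigid eighth-note grid at 120 BPM, every hit at velocity 100:
grid = [(i * 0.5, 100) for i in range(8)]
loose = humanize(grid, seed=42)
```

Real humanizers go further (modulating the jitter itself over time, swinging alternate beats, correlating velocity with beat position), but the basic move is exactly this kind of bounded randomization.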
A well-played instrument certainly feels like someone speaking and expressing themselves to you. There are attempts to capture this with MPE instruments such as the Osmose, or Imogen Heap's MiMU gloves.
Hmmm, I disagree, having played electric and acoustic guitars for over two decades and begun learning piano and synths for the first time in 2025.
For one, you can’t easily play two melodies simultaneously across several octaves, using both of your hands, with an electric guitar.
Stringed electronic instruments do have their advantages, but so do the others. Each music making thing has its place in the spectrum.
Two books that have helped me greatly in my musical life, in case people haven’t heard of them, are The Listening Book, and Bridge of Waves, by W.A. Mathieu.
There are certainly guitarists who can play simultaneous melodies.
If you're limiting yourself to a 6-string guitar, the distance between the two melodies would be limited compared to a piano, but guitars don't have to be limited to 6 strings.
Classical guitar is full of this kind of thing.
Having taken piano lessons but being more into guitar I think the thing is almost all people who play piano are introduced to this and it is a core concept in far more piano music than guitar music. But it is not impossible on guitar, and many works for piano that get adapted to guitar require the player to do so.
E.g. there are plenty of players who have studied and played the Well-Tempered Clavier on guitar.
You can play with both hands on a Chapman Stick: the right hand can do the bass and the left the melody/chords, or vice versa (the Chapman Stick is played by tapping the strings with both hands).
Guitars certainly have a more intimate connection between the touch of fingers and the sound, including the bending of the tone, one of Hendrix’s virtuosities.
Keyboards can approach that with polyphonic touch keys like the Hydrasynth (lean into keys, pressing them harder, for bending the tone in a configured patch), sustain pedals, and pitch bend/modulation controls, but not the nuanced touch of skin on a vibrating string.
I think synth guitars exist, too, but don’t know anything about them. The pedalboards are enough, maybe :)
> There is an unsolvable disconnect between the performer's actions and their audience
Is that really true though? If I watch a cellist play I can pretty clearly see all the things they are doing and it will correlate neatly to the timbre of the sound.
Secondly, I think it's important to note that the tube amp and the guitar are separable, and I don't think their connection is particularly magical. I can reamp a sound from my synthesizer (or maybe a keytar?) into a guitar chain, and if I manipulate the mic and other controls in the same way I might manipulate the pickup, I can also get all manner of interesting feedback effects. My inputs will have different harmonic characteristics of course, and the tube amp's effects are mostly transformations of harmonics; you'll still get some cool tones and they will be subject to a lot of the same rules as if a guitar was being played.
They're talking about electronic instruments there. The comment is about how electronic instruments don't generally match the physical expressiveness of acoustic instruments (like the Cello).
I'm talking about how electronic instruments are deficient in expressiveness compared to your cello example.
> Secondly I think it's important to note the tube amp and the guitar are separable, and I don't think that their connection is particularly magical. I can reamp a sound from my synthesizer (or maybe a keytar?) into a guitar chain, and if I manipulate the mic and other controls in the same way I might manipulate the pickup, I can also get all manner of interesting feedback effects.
The story is not quite so simple. Your synthesizer is going to have a buffered output, so it won't have the complex impedance-loading interactions with the amplifier that the guitar pickup does.
This is actually critical to how early distortion effects such as the classic Fuzz Face work, and imo is essential for the kind of complex timbres you can produce with a guitar + tube amp.
In fact you can take an electric guitar, put a buffer pedal in the chain between your fuzz pedal and amp and completely destroy the ability to produce wild feedback and distortion.
I'm a guitarist, but there's nothing particularly magical about a high-impedance signal, other than that it tends to lead to noise and makes really obnoxious things matter, like how low-capacitance your cable is. Also, a TON of modern guitars have low(ish)-impedance outputs because they use active pickups.
The pedals and system being dependent on the high impedance was always a bug, not a feature, and makes the setup incredibly dependent on variables that really wouldn't be that hard to just buffer and then recreate deterministically. Like, if your pedal should react to that impedance, just buffer the front and put a big inductor (or a transformer using only half, or - and I've actually seen this - a whole guitar pickup) in the pedal. Then you're not dependent on the pickups of the guitar or the capacitance of the cable or anything else, and you can make sure the effect sounds good regardless of pickup type.
A Fuzz Face works the way it does because it actually gets affected by the guitar's impedance changing as you work the knobs on the guitar and pick differently. The Fuzz Face has minimal input filtering, the guitar's knobs actually change the bias of the first transistor IIRC and cause massive changes in sound.
If you stick a buffer in front of it, that interaction is gone, and there is nothing you can stick after the buffer to bring it back. You pretty much have to plug the guitar directly into a Fuzz Face for it to work as intended. There are even constant arguments about putting the wah in front of the Fuzz Face or after it. I'm not sure the article even has it right, or whether Hendrix did it differently at different times; other articles show a different order of the effects.
There are other fuzz circuits that behave differently and work better with buffers and would be more uniform when used with other types of instruments or with electric guitars with active pickups (which are buffered).
E.g. I have a Tone Bender and have had several fuzzes in the "Big Muff" category, along with one based on the Fox Tone Machine. The Tone Bender and Big Muff can't clean up via the guitar controls at all like the Fuzz Face can, and IIRC the Fox Tone Machine is somewhere in the middle. The Fuzz Face, when set up correctly, is really quite amazing, as you can go from crystal clear to crushing fuzz with the volume knob on the guitar. Once you've tried it, you realize Jimi Hendrix was doing it constantly, in an amazing way.
That is going to be something like a transformer to step down your line level signal and some series resistance to match the load to help drive the amp.
An actual coil pickup has reactive impedance that is frequency dependent and will result in a more complex interaction between the devices.
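To put rough numbers on that (the component values below are ballpark figures for a typical single-coil and cable, not measurements): modeling the pickup as a series R-L coil loaded by the cable capacitance shows the impedance peaking sharply near the coil/cable resonance, which is exactly the frequency-dependent loading a buffered line-level output doesn't present:

```python
import math

# Illustrative single-coil model (not measured values):
R = 6_000       # coil DC resistance, ohms
L = 2.5         # coil inductance, henries
C = 500e-12     # cable + amp input capacitance, farads

def pickup_impedance(f):
    """Impedance magnitude of a series R-L coil in parallel with capacitance C."""
    w = 2 * math.pi * f
    z_coil = R + 1j * w * L      # series resistance + inductive reactance
    z_cap = 1 / (1j * w * C)     # capacitive reactance of the cable
    return abs(z_coil * z_cap / (z_coil + z_cap))

# Near DC the coil looks like ~6 kOhm; near the L-C resonance (~4.5 kHz
# for these values) the impedance peaks into the hundreds of kOhm:
for f in (100, 1_000, 4_500, 10_000):
    print(f"{f:>6} Hz: {pickup_impedance(f) / 1000:8.1f} kOhm")
```

That resonant peak, and how anything connected downstream shifts or damps it, is the interaction a buffer removes.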
> The pedals and system being dependent on the high impedance was always a bug, not a feature
Sure if you think like an engineer, but everything you are complaining about is what allows someone like Jimi Hendrix to do what he did with a guitar.
Eminently separable, but it's good to be aware of the tradeoffs.
Not magic at all, physics.
It's good to understand that high-impedance is not the biggest deal, but one thing about the magnetic pickups that not everybody realizes is the way that plugging directly into a tube (pre)amp basically magnetically couples the strings to the grid of the input tube.
And that grid has no further physical connection to any other components in the circuit, not even within the same tube, except for clouds of electrons and the flow that occurs among the electrodes.
That way your music basically starts out being sprayed through space directly from the strings which create the magnetic signal.
The thing about high impedance is that the relatively minuscule resistor values between the amp's input jack and the input tube's grid pin are so insignificant compared to the pickup's internal impedance that the resistance might as well be zero.
The only reason there is a resistor in between the input jack and the input grid anyway is to accommodate a high-impedance input with better stability under wider conditions than otherwise.
Now you can get a righteous sound with any number of pedals in between the guitar & amp, especially if battery power is used to boost the signal to more than the guitar puts out magnetically, and it's been the mainstream for so long that people almost never consider doing it any other way.
It's just not the same magnetic coupling from the strings to the tube, you can't have both unless it's a tube pedal.
I've designed lots of solid state circuits too and there is plenty of excellence when coupling the same magnetic pickup directly to a silicon or germanium crystal lattice and going from there. Whether it's pedals or a pure solid-state amp. Instead of using any tubes at all.
Also some people prefer having tubes only for the audio output section, coupled to the magnetic speakers through the antique-style audio output transformer the old-fashioned way.
They're comparing an electric guitar to electronic instruments, like MIDI keyboards. An electric cello would be the same thing as an electric guitar in this context.
> have a fundamental problem with human expression.
How up to date is this opinion of yours? Expression on guitar is pretty intuitive, but modern electronic instrument manufacturers have been working on this problem and created modes of expression that definitely solve this problem.
For example, EWIs allow you to use breath control for expression with many of the same techniques available on actual wind instruments. Also many synths now have features like polyphonic aftertouch, pitch/mod wheels, which allow you to add expression to a note while it is playing. Apps and hardware exist which allow you to use novel methods of capturing motion or other forms of expression. And most modern synths/midi controllers allow you to decide what parameters are affected.
> Then on top of all that it is so incredibly physical
That's an affectation. I can stand on my tiptoes and close my eyes when bending up a note on the synth the same as I can on the guitar. Neither affects the sound, and both are a conscious decision to project an appearance of "I'm really shredding"
> With an electric guitar you get the physicality and dynamism of an acoustic instrument with the complex timbres and extended technique possibilities of an electric/electronic instrument.
That can apply to any instrument once you "electrify" it. What makes a guitar more expressive than a cello or trumpet with a pickup/mic running through effect processing? I play guitar, keys and trumpet, and while I agree that a casio keyboard has limited expression options, your opinion doesn't sound researched.
> What makes a guitar more expressive than a cello or trumpet with a pickup/mic running through effect
The difference lies in the pickup! On those other instruments you will be using a contact mic (piezo transducer), whereas the solid-body guitar is using an inductive coil.
The contact mic is going to pick up only physical resonance, whereas the coil is measuring an electromagnetic field. Plucking the steel string induces a change in voltage in the coil. This means the coil can pick up all sorts of interesting, frequency-dependent electromagnetic interference from the tube amplifier and involve that in whatever feedback loops are occurring.
So the difference in expression is in the oscillator type?
Are we using the same definition of "expressive"? I play synth, guitar and trumpet, and the trumpet is by far the most expressive of the three, both musically and physically. You have basically all the same options for expression that you do with a human voice (vibrato, dynamics, glissando, etc) plus the expressive techniques offered by the instrument mechanics (for example: half-valving, trills, lip slurs, using a plunger as a LPF).
Sure, you need a microphone or contact mic, but again, that's just the source of your oscillator. After that, sound design is just sound design. I'm not saying everybody should play electric trumpet, but it's just absurd to make blanket statements like "electric guitar is the most expressive electronic instrument".
My whole point is about comparing electric/electronic instruments. I NEVER said that the guitar is the best acoustic instrument. I explained how there is a difference between the way a guitar works with a coil pickup versus other instruments with a contact mic but you don't care.
I don't really want to keep arguing with you. You can win the thread if you want. Congratulations!
> I NEVER said that the guitar is the best acoustic instrument.
Me neither? This thread was about expressiveness. I asked the OP why they think guitar is more expressive than anything else and you responded with information about how pickups interact with amps.
Great argument -- but I'd also counter that "the turntable" (i.e. in the hands of experts like Q-Bert, Craze, Rob Swift, Jazzy Jeff and others) fits this quite well -- especially re your "have the audience understand what he is doing argument"
Haha, that is a great, highly expressive counterexample! However, as far as versatility of sound goes, I still think the guitar + tube amp wins, as you have access to all of Western music theory and technique, since it's still a traditional string instrument.
There have been some interesting keyboard input devices coming out which allow for more expression than normal piano keys, using a sort of hack to the MIDI system called MPE - MIDI Polyphonic Expression. For example the Seaboard Rise or the Osmose. Depending on the instrument it's possible to do per-note pitch bends, change pressure while holding notes, perform vibrato etc. Visually the physical movement is not as interesting as electric guitar though, so yours probably still wins.
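The trick MPE relies on is giving each note its own MIDI channel, so a normally channel-wide message like pitch bend becomes per-note. A raw-bytes sketch of that idea (the 48-semitone bend range is MPE's default for member channels; the helper names are my own):

```python
def pitch_bend_bytes(channel, semitones, bend_range=48.0):
    """Encode a MIDI pitch-bend message, assuming the given bend range in semitones."""
    # 14-bit pitch bend: 8192 is center (no bend)
    value = int(8192 + 8192 * semitones / bend_range)
    value = max(0, min(16383, value))
    return bytes([0xE0 | channel, value & 0x7F, (value >> 7) & 0x7F])

def note_on_bytes(channel, note, velocity):
    """Encode a MIDI note-on message."""
    return bytes([0x90 | channel, note, velocity])

# Two simultaneous notes, each on its own member channel,
# so one can bend up a semitone while the other stays put:
msgs = [
    note_on_bytes(1, 60, 100),   # middle C on member channel 2 (0-indexed 1)
    note_on_bytes(2, 64, 100),   # E on member channel 3
    pitch_bend_bytes(1, 1.0),    # bend only the C up one semitone
]
```

On a conventional single-channel setup, that last pitch-bend message would drag both notes with it; per-note channels are what let the Seaboard or Osmose treat each finger independently.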
> All other electronic instruments, with the one exception of the Theremin, have a fundamental problem with human expression. There is an unsolvable disconnect between the performer's actions and their audience.
Electric bass? Heck, even in synthesizers, you have the EWI or the Haken Continuum.
Guitar (and bass) are obviously far and away the most successful, but it does a disservice to a number of wonderful inventions to say they're the only ones. Just look at what the Japanese band T-SQUARE does with the EWI to see people innovating at the edges.
I feel like synthesizers (the Fairlight CMI, Moog anything, the Synclavier, the PPG Wave, and the general concept of modular synthesis) are pretty staunch competitors. Yours is certainly a fun and fair take, and arguably the electric guitar + tube amp birthed many genres (blues, soul, funk, rock, punk, metal, etc.), whereas synthesizers remained pretty niche, contributing to experimental and pop music, mixing in with rock, funk, and disco, and the titan of EDM that grew out of all that.
This comment is a love letter to electric guitar. I adore it. Consider reading “Desolation Road” by Ian McDonald. I don’t want to spoil any of it, and perhaps science fiction isn’t your cup of tea, but at one point there is a character on Mars with a 700-year-old strat, and you can tell Ian McDonald loves the guitar as much as you do.
You could argue that it's one of the most versatile instruments, sure. "Greatest" is completely subjective.
But is it one of the most versatile instruments? You can do signal transforms with any kind of audio input, although it's done more with the electric guitar than any other instruments.
I would say that, in practice, it has the most versatile sonic profile.
A modular synth is more versatile in terms of enumerated signal transformations. It's the ability to be expressive with those signal transformations that makes the guitar + tube amp what it is.
With the right interface, I think the synth can be more expressive. Look at the Haken Continuum or ExpressiveE Osmose - both can be used with something like the Expert Sleepers FH-2 to get MPE data to the modular.
I do see your point, and agree the amount of articulation you can do on guitar is hard to beat, but I do think a synth can win if the setup is built for it.
Synths with mod wheels are the bomb. I used to have a Roland with a pitch wheel for bends that you could push for tremolos, vibratos and such, plus way more voices, envelopes, etc., and that was a few decades ago. I'm sure that nowadays guitars aren't going to compete except at one thing: making guitar-sounding noises. You can get guitary sounds from a synth, but to me they come off too clean and lack the slop that various fingerings produce lol
Synth music elevated electrically generated tones beyond anything heard before.
I remind you that most rock and roll and rock music was about speed and mimicking the sound of a rumbling car engine, as it was a symbol of freedom in America: being able to run away from your toxic community to find a better life anywhere else.
That was the message of rock and roll for the young: a speedy engine for your ears.
Electronic music was like replacing the car with a UFO, evoking space travel.
With the progressive subgenre of techno music you got the same feeling, but with no subtle hints. Heck, one of the most famous songs ever in Spain, "Flying Free", literally remixes the sounds of drifting cars between the melodies, making the listener really happy in a very direct way, as tons of young people in the '90s got to the outskirt night clubs... by car. So they felt as if they were driving an infinite highway rave with no end for days.
The amusing thing (to me at least) is that while the DX7 gave users almost infinite options as to how they could create and shape sounds, if you know what to listen for you'll hear the E PIANO 1 and BASS 1 presets in about half of all mid-80s hits. Turns out when they gave musicians a tool with immense flexibility, many of them still chose to use two of the (admittedly great) preset sounds.
Apparently this happens every time. A sample disk included with tracker software was used in hundreds if not thousands of modules and pretty much defined the sound of the Amiga.
Yup. FM Synthesis is challenging enough to implement, but doing so on the DX7's interface is a whole other level of frustrating. It's far from the hands-on interfaces of most subtractive or modular synthesizers.
The DX7's FM synthesis opened the door to a pretty narrow but interesting range of sounds, bells and brass, which people loved, and it was a ripsnorting success for a time. But it didn't displace subtractive analog synths, and people aren't exactly playing FM synthesizers any more, while they are now heavily back into analog subtractive. Of course there are also romplers and samplers, etc., and those can achieve the sounds that FM did, but it's hard to call the DX7 any kind of be-all end-all.
I watched Wayne Coyne of the Flaming Lips do something similar with some kind of "I don't know what" controller, it was some kind of input in his microphone stand. As he moved it around, the sound and projection changed.
I remembered learning about similar MIDI controllers when I was in school.
Imogen Heap created a set of gloves that transform finger flexing and wrist movement into midi signals you can use in whatever way your performance software allows.
Similar to the Theremin is the ondes Martenot. Jonny Greenwood (Radiohead) describes it as a "very accurate Theremin".
You can hear it particularly on "Where I End and You Begin" from Hail to the Thief. Ed O'Brien compliments its sound using an EBow (back before he had the sustainer) in that song.
Yes! I always think first of How to Disappear Completely, which I think was the first song he used it on. I remember watching some concert in college from the Kid A days, and he would have like 3 Ondes Martenot players on stage with them, crazy stuff from the band that wrote Creep like 5 years earlier.
This video might be the best explanation of Creep, which, while they never wanted to record it, they do acknowledge set them up to be able to do everything since. But a lot of Pablo Honey is mediocre.
This is also cool: a live version of How to Disappear Completely at the start, and one of the few Jonny interviews where he speaks. From a documentary on the Ondes Martenot.
Very cool! I'm confused why you didn't link the video of Radiohead wearing all black and playing at MTV Spring Break :-).
I was VERY deep into Radiohead in college (early 2000s), and love all of these videos; I'm pretty sure I've seen that 2nd video footage around the time of the Kid A tour, but the dates are wrong. Kid A is probably one of my 5 deserted island albums.
I was never an MTV viewer, and came by Radiohead through JJJ radio (Australia).
My ah-ha moment was the release of OK Computer. First heard the album driving in my car and was instantly blown away. Before that, Creep and the singles from The Bends were just some of the better BritPop stuff I was into at the time.
Saw them early '98 on the tour which cemented them as my favourite band. They played Creep, but the high point was going out on Street Spirit.
>"All other electronic instruments, with the one exception being the Theremin, have a fundamental problem with human expression. There is an unsolvable disconnect between the performer's actions and their audience."
Look at the Roli Seaboard; it has an insane number of degrees of freedom / expression.
No two trumpet players sound the same. I know who is playing just by the tone. Listen to Herb Alpert / Al Hirt / Maurice Andre, all playing the same instrument, but wildly different.
I generally reserve the word electronic to mean something with a microcontroller or discrete logic components. Electronic guitars exist, but they're basically differently shaped keyboards.
I often lament the lack of other electric instruments.
I have come around to the idea of guitars being electronic instruments. Strings are the original oscillators. Once they become electrical signals it isn't clear to me how they differ categorically from any other electric instrument. There are an almost infinite number of pedals, many of which offer things like filters, LFOs, and other synthesis stalwarts. You could even make the guitar a controller for more traditional synthesis work.
100% vibe coded, buyer beware. I was annoyed that I couldn't easily toggle between personal and work accounts and maintain all my same commands, hooks, etc. without fully logging out and back in.
This tool lets you manage multiple accounts in `~/.claude/envs` and activate them per shell session.
Music recommendation is such a hard problem. There are all these seemingly obvious relationships you can map between bands to create a big graph that looks good but that almost never captures what goes on when a human with deep music knowledge recommends music. Often the best recommendations have no obvious relationships to the band you like.
I played around with this tool a bit and didn't find it any better than other, more traditional music discovery tools, which is to say not very effective.
For example, I entered John Zorn and was recommended a bunch of John Zorn's bands. I entered The Residents and got The Pixies :/
I think it's more effective to click around on MusicBrainz and Wikipedia.
However for some genres that approach won't work, since they are either too new, too niche, the genre-description says too little about the actual songs etc. If this is the case another tip is to go at it from the production/distribution/scene side. So you check music mixed by the same audio engineer, released on the same record label, made in the same city during the same time. This can get you surprisingly far.
There is no real shortcut to doing it yourself, part of appreciating that music is often also to understand the context within which it was made.
Great suggestion. Likewise, exploring who played shows with whom is another great relation for music discovery. Often you can find radically different bands that were part of the same social scene but which you can still relate to each other.
There are so many ways music can be connected that aren't accounted for by genre labels or "sounds like."
I agree this works really well and do it; it's essentially what I meant when I said 'clicking around MusicBrainz and Wikipedia.' That said, I wouldn't be satisfied with this as the only way I could discover new music. There are so many dimensions that don't get codified in Wikipedia or MusicBrainz.
You seem knowledgeable about this... Care to test my old project for music recommendation? I built it by looking at co-occurrence of artists in Spotify playlists, which gives me word2vec-style vectors, and then it's just kNN.
No login needed, just enter some artist names and see what you get:
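For the curious, the core idea can be sketched like this (the playlist data, function names, and similarity choice here are my own toy illustration; the real project learns denser word2vec vectors rather than using raw co-occurrence counts):

```python
import numpy as np

# Toy playlists; the real data set would be millions of Spotify playlists.
playlists = [
    ["radiohead", "portishead", "massive attack"],
    ["radiohead", "sigur ros", "portishead"],
    ["primus", "king gizzard", "mr. bungle"],
    ["mr. bungle", "the residents", "primus"],
]

artists = sorted({a for p in playlists for a in p})
idx = {a: i for i, a in enumerate(artists)}

# Co-occurrence matrix: how often two artists share a playlist.
C = np.zeros((len(artists), len(artists)))
for p in playlists:
    for a in p:
        for b in p:
            if a != b:
                C[idx[a], idx[b]] += 1

def recommend(artist, k=3):
    """kNN over cosine similarity of co-occurrence rows; a word2vec-style
    embedding is essentially a learned, lower-dimensional version of these
    rows, but the neighbor lookup is the same."""
    v = C[idx[artist]]
    norms = np.linalg.norm(C, axis=1) * np.linalg.norm(v)
    sims = C @ v / np.where(norms == 0, 1.0, norms)
    sims[idx[artist]] = -1.0          # never recommend the query artist
    return [artists[i] for i in np.argsort(-sims)[:k]]
```

On the toy data, asking for Radiohead surfaces the trip-hop/post-rock cluster and not the Primus cluster, which is the basic behavior the parent comment describes.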
Very interesting, I've been working on a similar project (using word2vec to learn vectors using playlist data), but using songs instead of artists as the 'words'.
The main bottleneck at this point is the volume of data: many songs I'm interested in are only represented in a handful of playlists. Evaluation at any useful scale is also quite difficult. For somewhat obvious reasons, in our AI era Spotify has become quite skittish about letting third parties gain access to their data at scale...
This is pretty neat. It shows good relationships, especially in the edge cases where an artist has a very unique sound that other artists don't mimic, but where people who like that artist will typically also like certain others.
Would be very cool if it supported smaller artists than it currently does, because imo that's how you start surfacing emerging talent.
Nothing beats humans with great music tastes and deep knowledge. I’ve yet to find any form of recommendation engine that has surprised and delighted me the way humans have.
This tool might unearth something interesting, but I find it sus that it recommended the same artist (Adrianne Lenker) when I asked about both Aimee Mann and Steven Jessie Bernstein.
Reading your comment and all its subtree made me realize there's another difficulty to the problem: what atomic unit do you use? Tracks, albums, artists?
One might argue that "artist" isn't granular enough, since lots of (most?) artists change sound during their career. For the two others, I think recommendations should be trained and given separately (segregated, if you will) between people who listen to albums and those who only care about tracks/singles.
The problem is there are different ways people engage with music. Some listen to the lyrics and want an emotional connection, some view it as exploratory art, others wear it as an identity, some are just looking for similar sounds... You need a routing system that matches the recommender to the style of engagement.
Microtonal polyrhythmic looping absolute madness. (you can hear some Primus and King Gizzard and the Lizard Wizard kinda sounds in there, if they also tickle your fancy)
Residents -> Pixies is certainly an odd recommendation. Having said that, where _can_ you go from The Residents? Daniel Johnston?
> Residents -> Pixies is certainly an odd recommendation. Having said that, where _can_ you go from The Residents? Daniel Johnston?
I would be truly impressed if a recommendation engine took me from The Residents to Balinese gamelan. My aunt plays in a gamelan orchestra with one member of The Residents, and learning that somehow made so much sense to me. These are the kind of out-of-pocket recommendations that an engine will never capture.
Interesting. Spotify works almost perfectly for my discovery needs. I just pick a track I know that fits my mood, then use the (3-dot menu) "Go to Radio" option, which leads to a playlist that usually includes tracks and/or artists new to me. It's been a reliable discovery mechanism for me for many years. Also, there's a new feature I first saw within the last week, a "non-personalized" option that increases the "new to me" ratio.
The "you might also like" for a given artist is usually the most generic related artists; for anything remotely related you'll get basically the same list, which is the middle of the Venn diagram of everyone who listens to them.
I always find this interesting… Spotify is phenomenal for me - about every third Monday Discovery playlist has two or three hits, which feels like a pretty solid ratio, at this point. YouTube has never suggested a single thing I cared for.
I wonder if it's a curation thing? I've been with Spotify since the first day it was available, and rarely use YouTube. I haven't had a music hit ratio this good since newsgroups and (real) forums a decade ago, which were a different form of curation.
Rest in peace.