galaxyLogic's comments | Hacker News

We do things because of our biological needs, ultimately to spread our DNA. AI has no "reason to do things" unless we program one into it. We could do that and create super-capable "worm" malware that would be hard to get rid of. But AI by itself has no "driving force". It does what it's programmed to do, just like us humans. AI can be used in weapons, and such weapons can be hugely lethal. But so is the atomic bomb. AI by itself will not "take over". It could be used by a rogue nation to attack another nation, but surely that other nation would then use AI to defend itself. This is just to say I'm not afraid of AI; I'm afraid of people with fascistic leanings.

Isn't this like it is in many relational databases, where you can query them about the tables they contain?

The key difference is that it's not just about schema metadata (tables, indexes, views, columns, etc.). PostgreSQL is fabulous regarding this. Even native types are part of the catalog (pg_catalog).

Things are great in your DB... until they aren't. The post is about making observability a first-class citizen. Plans and query execution statistics, for example, become queryable through a uniform interface (SQL) without the need to install DB extensions.
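The idea that a database's own structure is just another queryable table can be sketched with SQLite's built-in catalog (used here only as a minimal stand-in for PostgreSQL's pg_catalog):

```python
import sqlite3

# Minimal illustration: the catalog is itself a table you query with SQL.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("CREATE INDEX idx_users_name ON users (name)")

# sqlite_master is SQLite's catalog; pg_catalog plays the same role in
# PostgreSQL, down to native types.
rows = conn.execute(
    "SELECT type, name FROM sqlite_master ORDER BY name"
).fetchall()
print(rows)  # [('index', 'idx_users_name'), ('table', 'users')]
```

The observability argument in the post goes one step further: not just schema, but runtime state (plans, execution statistics) exposed through the same SQL interface.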


Thank you and yes!

By making the entire architecture of the database visible via system objects, you allow the user to form a mental model of how the database itself works. Instead of being just a magic box that runs queries, it becomes a fully instrumented data model of itself.

Now, you could say: "The database should just work," and perhaps claim that it is a design error when it doesn't. Why do I need instrumentation at this level?

To that I can say: every database ever made makes query planning mistakes or has places where it misbehaves. That's just the way this field works, because data is fiendishly complicated, particularly at high concurrency or when there is a lot of it. The solution isn't (just) to keep improving and fixing edge cases; it is to make those edge cases easy for all users to detect.
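One way to make planner behavior detectable is to expose the plan itself as queryable rows. A minimal sketch, again using SQLite as a stand-in (not the system described in the post):

```python
import sqlite3

# The plan comes back as ordinary rows you can inspect programmatically,
# e.g. to check whether the index was actually chosen.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (a INTEGER, b TEXT)")
conn.execute("CREATE INDEX idx_a ON t (a)")

plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM t WHERE a = 1"
).fetchall()
for row in plan:
    # The last column is a human-readable description of the plan step,
    # such as "SEARCH t USING INDEX idx_a (a=?)" (exact wording varies
    # by SQLite version).
    print(row[3])
```

A system that also exposes execution statistics this way lets users spot planner edge cases without installing extensions or reading opaque log files.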


This sounds conceptually similar to performance_schema [1] in MySQL or MariaDB, which is a built-in feature originally introduced in MySQL 5.5 (2010). Or perhaps the easier-to-use sys schema [2], which wraps performance_schema among other things, introduced in MySQL 5.7 (2015).

It's great to have that observability functionality, but I don't really understand the purpose of writing a new DBMS from scratch just to add this. Why not get something merged into Postgres core?

[1] https://dev.mysql.com/doc/refman/8.4/en/performance-schema.h...

[2] https://dev.mysql.com/doc/refman/8.4/en/sys-schema.html


Merging into PostgreSQL core something that needs to run on top of petabytes of data in the cloud, on Iceberg, with an advanced query planner and a high-speed SIMD engine... and trying to squeeze into the "pg_" naming mess?

I don't think so...

And yes, it's conceptually similar to MySQL's, and also conceptually similar to SQL Server's implementation from 1997. That's by design. Obviously, we are not writing a new DBMS from scratch just to add system objects.

Have a look at some of the other blogs on that site to see what we are up to. Basically, we want to give you an experience that resembles the instrumentation you got used to from on-premise databases, but one that can run on top of Iceberg in the cloud.


> something that needs to run on top of a Petabytes of data in the cloud, on Iceberg, with an advanced query planner and a high speed SIMD engine

Part of my confusion was that this blog post makes no mention whatsoever of any of those things!

It gave me the (incorrect) impression that this observability functionality was the purpose of the product. And it is worded in a way which makes no mention of prior art in built-in DBMS observability.

Looking at the other threads here, I don't think I'm the only one who was confused about that. A couple intro paragraphs to the product might help a lot.


That's fair feedback and I shall take that into account in future blogs.

Thanks for letting me know; you can stare yourself blind on that stuff.


I think what's lacking when LLMs create code is that they can't "simulate" what a human user would experience while using the system. So they can't really evaluate alternative solutions to the human-app interaction.

We humans can imagine it in our minds because we have used the PC a lot. But it is still hard for us to anticipate how the actual system will feel for the end-users. Therefore we build a prototype, and once we use the prototype we learn that this cannot possibly work productively, so we must try something else. The LLM does not try to use a virtual prototype and then learn that it is hard to use. Unlike Bill Clinton, it doesn't feel our pain.


I don't think there is any mystery to what we call "consciousness". Our senses and brain have evolved so we can "sense" the external world, so we can live in it and react to it. So why couldn't we also sense what is happening inside our brains?

Our brain needs to sense our "inner talk" so we can let it guide our decision-making and actions. If we couldn't remember sentences, we couldn't remember "facts", and we would be much the worse for it. And talking with our "inner voice" and hearing it, isn't that what most people would call consciousness?


This is not nearly as profound as you make it out to be: a computer program also doesn't sense the hardware that it runs on; from its point of view the hardware is invisible until it is made explicit, as with peripherals.

You also don’t consciously use your senses until you actively think about them. Same as “you are now aware of your breathing”. Sudden changes in a sensation may trigger them to be conscious without “you” taking action, but that’s not so different. You’re still directing your attention to something that’s always been there.

I agree with the poster (and Daniel Dennet and others) that there isn’t anything that needs explaining. It’s just a question framing problem, much like the measurement problem in quantum mechanics.


another one that thinks they solved the hard problem of consciousness by addressing the easy problem. how on earth does a feedback system cause matter to "wake up"? we are making lots of progress on the easy problem though

This is not as strong a rebuttal as you think it is. To me (and, I imagine, the parent poster) there is no extra logical step needed. The problem IS solved in this sense.

If it’s completely impossible to even imagine what the answer to a question is, as is the case here, it’s probably the wrong question to pose. Is there any answer you’d be satisfied by?

To me the hard problem is more or less akin to looking for the true boundaries of a cloud: a seemingly valid quest, but one that can’t really be answered in a satisfactory sense, because it’s not the right one to pose to make sense of clouds.


> If it’s completely impossible to even imagine what the answer to a question is, as is the case here, it’s probably the wrong question to pose. Is there any answer you’d be satisfied by?

I would be very satisfied to have an answer, or even just convincing heuristic arguments, for the following:

(1) What systems experience consciousness? For example, is a computer as conscious as a rock, as conscious as a human, or somewhere in between?

(2) What are the fundamental symmetries and invariants of consciousness? Does it impact consciousness whether a system is flipped in spacetime, skewed in spacetime, isomorphically recast in different physical media, etc.?

(3) What aspects of a system's organization give rise to different qualia? What does the possible parameter space (or set of possible dynamical traces, or what have you) of qualia look like?

(4) Is a consciousness a distinct entity, like some phase transition with a sharp boundary, or is there no fundamentally rigorous sense in which we can distinguish each and every consciousness in the universe?

(5) What explains the nature of phenomena like blindsight or split-brain patients, where seemingly high-level recognition, coordination, and/or intent occurs in the absence of any conscious awareness? Generally, what behavior-affecting processes in our brains do and do not affect our conscious experience?

And so on. I imagine you'll take issue with all of these questions, perhaps saying that "consciousness" isn't well defined, or that an "explanation" can only refer to functional descriptions of physical matter, but I figured I would at least answer your question honestly.


I think most of them are valid questions!

(1) is perhaps more of a question requiring a strict definition of consciousness in the first place, making it mostly circular. (2) and especially (3) are the most interesting, but they seem part of the easy problem instead. And I’d say we already have indications that the latter option of (4) is true, given your examples from (5) and things like sleep (the most common reason for humans to be unconscious) being in distinct phases with different wake up speed (pun partially intended). And if you assume animals to be conscious, then some sleep with only one hemisphere at a time. Are they equally as conscious during that?

My imaginary timeline of the future has scientific advancements leading us to notice what's different between a person's brain in its conscious and unconscious states, then somehow generalizing that to a more abstract model of cognition decoupled from our biological implementation, and then eventually tackling all your questions from there. But I suspect the person I originally replied to would dismiss them as part of the easy problem instead, i.e. completely useless for tackling the hard problem! As far as I'm concerned, it's the hard problem that I take issue with, and the one that I claim isn't real.


I very much agree, especially on the importance of defining what we mean by the word "consciousness" before we say we cannot explain it. Is a rock conscious? Sure, according to some definition of the word. Probably everybody would agree that there are different levels of consciousness, and maybe we'd need different names for them.

Animals are clearly conscious in that they observe the world and react to it and even try to proactively manipulate it.

The next level of consciousness, and what most people probably mean when they use the word, is the human ability to "think in language". That opens up a whole new level of consciousness, because now we can be conscious of our inner voice. We are conscious of ourselves, apart from the world. Our inner voice can say things about the thing which seems to be the thing uttering the words in our mind. Me.

Is there anything more to consciousness than us being aware that we are conscious? It is truly a wondrous experience which may seem like a hard problem to explain, hence the "Hard Problem of Consciousness", right? But it's not so mysterious if we think of it in terms of being able to use and hear and understand language. Without language our consciousness would be on the level of most animals I assume. Of course it seems that many animals use some kind of language. But, do they hear their "inner voice"? Hard to say. I would guess not.

And so again, in simple terms, what is the question?


This is precisely the matter; I wholeheartedly agree. The metacognition that we have, that only humans are likely to have, is the root of the millennia-long discussions on consciousness. And the hard problem stems from whatever was left of traditional philosophers getting hit by the wall of modern scientific progress, not wanting to let go of the mind as some metaphysical entity beyond reality, with qualia and however many ineffable private properties.

The average person may not know the word qualia, but “is your red the same as my red” is a popular question among kids and adults. Seems to be a topic we are all intrinsically curious about. But from a physical point of view, the qualia of red is necessarily some collection of neurons firing in some pattern, highly dependent on the network topology. Knowing this, then the question (as it was originally posed) is immediately meaningless. Mutatis mutandis, same exact argument for consciousness itself.


Talking of "qualia" I think feeling pain is a good example. We all feel pain from time to time. It is a very conscious experience. But surely animals feel pain as well, and it is that feeling that makes them avoid things that cause them pain.

Evolution just had to give us some way to "feel", to be conscious, about some things causing us pain while other things cause us pleasure. We are conscious of them, and I don't think there's any "hard question" about why we feel them :-)


How about AI-generated widgets? I just tell AI what I want to see in a widget and it creates it?

Maybe simply "Show news about this topic"?


I think that's what Google Disco is:

https://www.theverge.com/tech/842000/google-disco-browser-ai...

Maybe? I really struggled to understand this product from the description and screenshots alone.


But placebos work, right? And they only work if you don't know that it is a placebo you are getting.

Placebos often work, even when a placebo is known to be a placebo.

I use rain-sounds or white noise plus noise-cancelling headphones to drown out my neighbor's TV. It bugs me that I have to hear advertisements coming over the wall when I wake up. If I'm really pissed off I turn on some reggae music with good bass. It always calms me down.


This makes me think how AI turns SW development upside down. In traditional development we write code, which is the answer to our problems. With AI we write questions and get the answers. Neither is easy; finding the correct questions can be a lot of work, whereas if you have some existing code you already have the answers, but you may not have the questions (= "specs") written down anywhere, at least not very well, typically.


At least in my experience the AI agents work best when you give them a description of a concrete code change like "Write a function which does this here" rather than vague product ideas like "The user wants this problem solved". But coming up with the prompts for an exact code change is often harder than writing the code.

> both the actor model (and its relative, CSP) in non-distributed systems solely in order to achieve concurrency has been a massive boondoggle and a huge dead end.

Why is that so?


Well, lots of people have tried it and spent a lot of money on it and don't seem to have derived any benefit from doing so.


Actors can be made to do structured concurrency as long as you allow actors to wait for responses from other actors, and implement hierarchy so that if an actor dies, its children do as well. And that’s how I use them! So I have to say the OP is just ignorant of how actors are used in practice.
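The wait-for-a-response ("ask") pattern described above can be sketched in a few lines of Python with threads and queues. This is a hypothetical toy, not how Akka or Erlang/OTP implement it; real systems add the supervision trees mentioned above on top:

```python
import queue
import threading

class Actor:
    """Toy actor: a thread draining a mailbox, one message at a time."""

    def __init__(self, handler):
        self.inbox = queue.Queue()
        self.handler = handler
        threading.Thread(target=self._run, daemon=True).start()

    def _run(self):
        while True:
            msg, reply_to = self.inbox.get()
            if msg is None:  # poison pill shuts the actor down
                return
            result = self.handler(msg)
            if reply_to is not None:
                reply_to.put(result)

    def tell(self, msg):
        """Fire-and-forget send."""
        self.inbox.put((msg, None))

    def ask(self, msg):
        """Send and block until the actor replies: the 'structured' part."""
        reply = queue.Queue(maxsize=1)
        self.inbox.put((msg, reply))
        return reply.get()

    def stop(self):
        self.inbox.put((None, None))

doubler = Actor(lambda n: n * 2)
print(doubler.ask(21))  # 42
doubler.stop()
```

Because `ask` blocks the caller until the child replies, call scopes nest the way structured concurrency wants; the hierarchy/linking part (parent death killing children) is what supervision frameworks provide on top of this.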


> Actors can be made to do structured concurrency as long as you allow actors to wait for responses from other actors

At which point they're very much not actors any more. You've lost the deadlock avoidance, you can't do the `become`-based stuff that looks so great in small demos. At that point what are you gaining from using actors at all?


If you don't think actors are useful just because you need to wait for responses, I guess you've never used actors. That's just so implausible someone would say that if they just, you know, did it.

To adapt the analogy from the link in the root comment, this is akin to saying "`goto` can be made to do structured programming as long as you strictly ensure that the control flow graph is reducible". Which is to say, it is a true statement that manages to miss the point: the power of both structured programming and structured concurrency comes from defining new primitives that fundamentally do the right thing and don't even give you the option to do the wrong thing, thus producing a more reliable system. There's no "as long as you...", it just works.


Isn't this a bit theoretical since most real world systems are distributed these days, using a browser as GUI?

Except for Akka in Java, and the entirety of Erlang and its children, Elixir and Gleam. You obviously can scale those to multiple systems, but they provide a lot of benefit in local single-process scenarios too, imo.

Things like data pipelines, and games etc etc.


If I'm not mistaken ROOM (ObjecTime, Rational Rose RealTime) was also heavily based on it. I worked in a company that developed real time software for printing machines with it and liked it a lot.


I've worked on a number of systems that used Akka in a non-distributed way and it was always an overengineered approach that made the system more complex for no benefit.


Fair, I worked a lot on data pipelines and found the actor model worked well in that context. I particularly enjoyed it in the Elixir ecosystem where I was building on top of Broadway[0]

Probably has to do with not fighting the semantics of the language.

[0] https://elixir-broadway.org/


Really depends on the ergonomics of the language. In Erlang/Elixir/BEAM langs etc., it's incredibly ergonomic to write code that runs on distributed systems.

You have to try really hard to do the inverse. Java's ergonomics, even with Akka, lend themselves to certain design patterns that don't suit writing code for distributed systems.


It is political. Designing everything around cars benefits the class of people called "Car Owners". Not so much people who don't have the money or desire to buy a car.

Although, congestion pricing is a good counter-example. On the surface it looks like it is designed to benefit users of public transportation. But turns out it also benefits car-owners, because it reduces traffic jams and lets you get to your destination with your own car faster.


>Designing everything around cars benefits the class of people called "Car Owners".

Designing everything around cars hurts everyone including car owners. Having no option but to drive everywhere just sucks.


But the ad for my Cadillac says I'm an incredible person for driving it; that can't be wrong.


No, it benefits car manufacturers and sellers, and mechanics and gas stations.

Network/snowball effects are not all good. If local businesses close because everybody drives to WalMart to save a buck, now other people around those local businesses also have to buy a car.

I remember a couple of decades ago when some bus companies in the UK were privatized, and they cut out the "unprofitable" feeder routes.

Guess what? More people in cars, and those people didn't just park and take the bus when they got to the main route, either.


>No, it benefits car manufacturers and sellers, and mechanics and gas stations.

Everybody thinks they're customers when they buy a car, but they're really the product. These industries, and others, are the real customers


> Everybody thinks they're customers

So much so that my comment attracted downvotes.

C'est la vie.


But having a car is kind of bad. Maybe you remember when everyone smoked, and there was stuff for smokers everywhere. Sure that made it easier for smokers, but ultimately that wasn't good for them (nor anyone around them).

