
I found this blog post by Rands pretty enlightening:

https://randsinrepose.com/archives/the-update-the-vent-and-t...


Rands is awesome, and “Managing Humans” is just about the best management book I own.


Is Ollama effectively a dockerized HTTP server that calls llama.cpp directly? With the exception of this newly added OpenAI API ;)


More like an easy-mode llama.cpp that does a cgo wrapping of the lib (now; before, they built patched llama.cpp runners, did IPC, and managed child processes), and it does a few clever things to auto-figure out layer splits (if you have meager GPU VRAM). The easy mode is that it will auto-load whatever model you'd like per request. They also implement Docker-like layers for their representation of a model, allowing you to overlay configuration parameters and tag it. So far, it has been trivial to mix and match different models (or even the same model, just with different parameters) for different tasks within the same application.
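
To make the "auto-load whatever model per request" bit concrete, here's a minimal sketch of calling the OpenAI-compatible endpoint mentioned above from Python, assuming Ollama is running locally on its default port (11434); the path and model name ("llama3") reflect my local setup and may differ from yours:

    # Hedged sketch: plain HTTP call to Ollama's OpenAI-compatible endpoint.
    # Assumes a local Ollama daemon on the default port and an already-pulled model.
    import requests

    resp = requests.post(
        "http://localhost:11434/v1/chat/completions",
        json={
            "model": "llama3",  # illustrative model name; use whatever you pulled
            "messages": [{"role": "user", "content": "Why is the sky blue?"}],
        },
        timeout=120,
    )
    resp.raise_for_status()
    print(resp.json()["choices"][0]["message"]["content"])

In my experience the first request for a model that isn't already resident is slower, since that's when the auto-load happens.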


Currently going through Andrew Lo's MIT Sloan course, Finance Theory 1, for free:

https://ocw.mit.edu/courses/15-401-finance-theory-i-fall-200...

The lectures are well structured. Class questions are good.

And best of all... these lectures were done during the 2008 financial crisis, so you see how people react in real time. Fascinating.


You make a great suggestion. Not that B-schools get everything right or that they're on the cutting edge but, to the degree curricula and lectures are online, I'd probably lean that way more than bestselling business titles. (Which, truth be told, often have a chapter or two that distills down a lot of the key content. In some cases, they're worth the lengthier read to reinforce the key content but often they're not.)


I find lectures much better than books. A lot more engaging and good lecturers are also good storytellers.

Especially if you watch lectures at 2x the speed, you can save A LOT of time. And I'd bet your understanding is better, since lecturers can think through problems out loud. Indispensable, IMO.


I sort of hate doing the 2x thing, but I agree with your broader point. My good lecturers were, as you say, good storytellers. Including the guy who won the (I know, not really, before some pedant corrects me) Nobel Prize for behavioral economics.


What's an ELI5 of Rama? I found the docs confusing as well: https://redplanetlabs.com/docs/~/index.html

Please no buzzwords like "paradigm shift" or "platform". If diagrams are necessary, I'd love to read a post that explains it more clearly.


It's a backend development platform that can handle all the data ingestion, processing, indexing, and querying needs of an application, at any scale. Rather than constructing your backend from a hodgepodge of databases, processing systems, queues, and schedulers, you can do everything within Rama, a single platform.

Rama runs as a cluster, and any number of applications (called "modules") are deployed onto that cluster. Deep and detailed telemetry is also built-in.

The programming model of Rama is event sourcing plus materialized views. When building a Rama application, you materialize as many indexes as you need as whatever shapes you need (different combinations of durable data structures). Indexes are materialized using a distributed dataflow API.

Since Rama is so different than anything that's existed before, that's about as good of a high-level explanation as I can do. The best resource for learning the basics is rama-demo-gallery, which contains short, end-to-end, thoroughly commented examples of applying Rama towards very different use cases (all completely scalable and fault-tolerant): https://github.com/redplanetlabs/rama-demo-gallery


What do you mean by "platform"? Is this open source? Can I run everything locally?

Is this basically an RBDMS and Kafka in one? Can I use SQL?

I understand the handwaving around programming semantics, but I'd like clearer explanations of what it actually is and how it works. Is this a big old Java app? Do you have ACID transactions? How do you handle fault tolerance?

It may be early, but I believe folks will be curious about benchmarks. And maybe, someday, Jepsen testing.


Those questions are all answered in the documentation, which we spent a ton of time on. Some available resources:

- Public build that you can download and run yourself locally: https://redplanetlabs.com/docs/~/downloads-maven-local-dev.h...
- rama-demo-gallery, containing short, thoroughly commented examples in both Java and Clojure: https://github.com/redplanetlabs/rama-demo-gallery
- Gentle six-part tutorial introducing the concepts and API, including how to run stuff locally: https://redplanetlabs.com/docs/~/tutorial1.html
- Introduction to the first-class Clojure API: https://blog.redplanetlabs.com/2023/10/11/introducing-ramas-...

Here are a few pages related to fault-tolerance:

- https://redplanetlabs.com/docs/~/replication.html
- https://redplanetlabs.com/docs/~/microbatch.html#_operation_...
- https://redplanetlabs.com/docs/~/stream.html#_fault_toleranc...


Can you please elaborate more on the open-source aspect of this? Will it be an industry-revolutionizing, open-source project like containerd (Docker) that every little developer and garage dev can build upon, or will it benefit only the big tech corporate world, which controls and benefits from power and might and will be able to pay for this?

Especially since you chose to use the name Rama, I am wondering whether this will be for the benefit of all, or only for the benefit of the few who already control more than a fair share of power (finances)?


I like this description - the most on-point one I've seen in the thread and your docs. So it's not really a tool to use, but more of a framework to follow. Wouldn't be the first framework to provide tools / set up processes and workflows with a better-than-ever tradeoff of features/complexity/skill floor/etc.

But yeah, quite a lot of hype and red flags. My favorite from the website: "Rama is programmed entirely with a Java API – no custom languages or DSLs." And when you look at the example BankTransferModule.java:

> .ifTrue("isSuccess", Block.localTransform("$$funds", Path.key("toUserId").nullToVal(0).term(Ops.PLUS, "*amt")))

Yeah, it's probably fair to call that a DSL, even if it's entirely Java.

Anyway, hope to get the chance to work with event based systems one day and who knows, maybe it will be Rama.


I consider a DSL something that has its own lexer/parser, like SQL. Since Rama's dataflow API is just Java (there's also a Clojure API, btw), you never leave the realm of a general-purpose programming language. So you can do higher-order things like generate dataflow code dynamically, factor reusable code into normal Java functions, and so on. And all of this is done without the complexity and risks of generating strings for a separate DSL, like you get when generating SQL (e.g. injection attacks).
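
To illustrate just the string-generation hazard being contrasted here (a generic Python + SQLite sketch, nothing Rama-specific):

    # Hedged, generic sketch: building a query by string interpolation lets the
    # input become code (injection), while parameter binding keeps it as data.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT)")
    conn.execute("INSERT INTO users VALUES ('alice')")

    user_input = "alice' OR '1'='1"

    unsafe = conn.execute(
        f"SELECT * FROM users WHERE name = '{user_input}'"
    ).fetchall()  # the injected OR clause matches every row

    safe = conn.execute(
        "SELECT * FROM users WHERE name = ?", (user_input,)
    ).fetchall()  # the input stays data, matches nothing

    print(len(unsafe), len(safe))  # 1 0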


> different than anything that's existed before

Not quite - more like different from anything that is widely known.

I worked (a small bit) on very similar (proprietary, non-public, internal) systems ~5 years ago, and while doing so read blog posts about the experiences some people had with similar (also proprietary, internal) systems which at that point were already multiple years old ...

I guess what is new is that it's something you can "just use" ;=)


> a backend development platform that can handle all the data ingestion, processing, indexing, and querying needs of an application, at any scale

That's… a database…

I mean, seriously, how is this not a database?


Yes and no. To some degree it's the round trip back to the "let's put a ton of application logic into our databases and then you mainly only need the database" times.

Just with a lot of modern technology around scaling, logging, etc., which hopefully (I haven't used it yet) eliminates all the (many, many) issues this approach had in the past.


By my reading, it's a variant of the "Kappa architecture" (aka "event sourcing").

You have a "Depot", which is an append-only log of events, and then build arbitrary views on top of it, which they call "P-States". The Rama software promises low-latency updates of these views. Applications built on this would query the views, and submit new events/commands to the Depot.
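
For intuition only, here's a toy in-memory Python sketch of that shape (emphatically not Rama's API, just the append-only-log-plus-derived-view pattern being described):

    # Hedged sketch: an append-only event log plus a view materialized from it.
    from collections import defaultdict

    depot = []  # append-only log of events

    def append_event(event):
        depot.append(event)

    def materialize_follower_counts(events):
        # One possible "view": follower counts per user, derived purely from events.
        counts = defaultdict(int)
        for e in events:
            if e["type"] == "follow":
                counts[e["followee"]] += 1
            elif e["type"] == "unfollow":
                counts[e["followee"]] -= 1
        return counts

    append_event({"type": "follow", "follower": "alice", "followee": "bob"})
    append_event({"type": "follow", "follower": "carol", "followee": "bob"})
    print(materialize_follower_counts(depot)["bob"])  # 2

The real systems differ in that the views are kept up to date incrementally and with low latency rather than recomputed from scratch, but the query/submit split is the same.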


It seems like an event sourcing database. Basically, instead of writing data directly, you write a message, and then you can make read-only tables that update based on those messages. People do this today in certain domains, but it is definitely more complicated than traditional databases.


More complicated in what ways specifically? I think the relevant thing is whether building an app with Rama is more or less complicated. Rama may be more complicated than MySQL in implementation, but that doesn't affect me as a developer if it makes my job easier overall.


Discussing levels of complexity quickly gets pretty subjective. It is possible that Rama has found good abstractions that hide a lot of the complexity. It is also possible that taking on more complexity in this area saves you from other sorts of complexity you may encounter elsewhere in your application.

However, there is just more going on in an event sourcing model. Instead of saving data to a location and retrieving it from that location you save data to one location, read it from another location, and you need to implement some sort of linker between the two (or more).

This also comes down to my personal subjective experience. I actually really like event sourcing but I have worked on teams with these systems and I have found that the majority of people find them much harder to reason about than traditional databases.


There can be a lot of integration pain when implementing event sourcing and materialized views by combining individual tools together. However, these are all integrated in Rama, so there's nothing you have to glue together yourself as a developer. For example, using the Clojure API, here's how you declare a depot (an event log):

(declare-depot setup *my-events (hash-by :user-id))

That's it, and you can make as many of those as you want. And here's how a topology (a streaming computation that materializes indexes based on depots) subscribes to that depot:

(source> my-events :> *data)

If you want to subscribe to more depots in the topology, then it's just another source> call.

That these are integrated and colocated also means the performance is excellent.


This is what has me excited about Rama. I was very into the idea of event sourcing until I realized how painful it would be to make all the tooling needed.


I don't care about HN rules, this is an astroturfing account. Don't believe anything it posts about RAMA.


Examples should be in: PHP, NodeJS/Typescript, Python.

Seeing Clojure ironically makes me think of Twitter though.


well it's either going to be Java or Clojure with this framework so those examples would be kind of pointless.


So most apps… are pointless to use with this tool? That’s definitely a reason to stick with Postgres.


I don't think you understand what this tool actually is. It's not just a database.


Seems like another attempt at No-SQL. "But this time it's different!"


That's how things typically take off, not on the first attempt. Depends on what's different this time.

(Though NoSQL has outlived its usefulness as a concept IMO, it's just too loose to be useful beyond its early use for "something like CouchDB/Mongo", which this is clearly not)


This is completely different from No-SQL. It's much more than just a different database.


How exactly is it different from No-SQL? No schema? Check. No consistency (eventually consistent)? Check. Key-value store (because it's using ZooKeeper under the hood)? Check. Promising amazing results and freeing you from the chains of SQL? CHECK!


Everything you wrote here is false, with the exception of Rama not using SQL. Rama has strong schemas, is strongly consistent, and is not limited to key/value (PStates can be any data structure combination). Zookeeper is used only for cluster metadata and is not involved with user storage/processing in any way.


Negative SQL?


I have a very naive question. Why does Python have a GIL? Is it because of the language design, or because of its implementation?

If it’s due to the language design, how will Mojo avoid a GIL, given its goal is to be a superset of Python?


> Is it because of the language design, or because of its implementation?

Bit of both? The language expects properties that lend themselves to having a GIL (i.e. attempts at removing it from CPython have turned out to make it slower in many cases), but it's not impossible for an implementation that does more advanced analysis to be able to figure out cases where it isn't needed.

Code written to massively parallelize will want to (or have to) keep accesses inside the thread context anyway, and thus won't hit the cases the GIL serves; and if you allow language extensions, they can make that explicit where needed.
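
A small, hedged sketch of the CPU-bound case (timings will vary by machine): the threaded version runs roughly serially because only one thread can execute Python bytecode at a time, while each process gets its own interpreter and its own GIL.

    # Hedged sketch: compare 4 CPU-bound tasks under threads vs. processes.
    import time
    from concurrent.futures import ThreadPoolExecutor, ProcessPoolExecutor

    def count(n):
        total = 0
        for _ in range(n):
            total += 1
        return total

    def timed(executor_cls):
        start = time.perf_counter()
        with executor_cls(max_workers=4) as ex:
            list(ex.map(count, [10_000_000] * 4))
        return time.perf_counter() - start

    if __name__ == "__main__":
        print("threads:  ", timed(ThreadPoolExecutor))    # roughly serial under the GIL
        print("processes:", timed(ProcessPoolExecutor))   # roughly parallel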


Can anyone elucidate why code written in a functional style is harder to debug?


Congrats on the launch, this is fantastic. I love seeing these releases. They bring me so much joy & excitement.

Can you provide some thoughts on the benefits of doing ML on Elixir vs. Python? Is the benefit the language semantics? Is it much easier to get distributed work in Elixir ML vs Python ML? Are the tools better/smoother? Are there ML ops improvements? Perhaps there’s a blog post I missed :)


I have written a bit about potential benefits in the initial announcement of Nx [0] and then on its first release [1].

In a nutshell, there have been trends in Python (such as JAX and Thinc.ai) that argue functional programming can provide better abstractions and more composable building blocks for deep learning libraries. And I believe Elixir, as a functional language with Lisp-style macros, is in an excellent position to exploit that - as seen in "numerical definitions" which compile a subset of Elixir to the CPU/GPU.

I also think the Erlang VM, with its distribution and network capabilities, can provide exciting developments in the realm of federated and distributed learning. We aren't exploring those aspects yet but we are getting closer to having the foundation to do so.

Regarding ML ops, I believe one main advantage is explained in this video announcement. When deploying an ML model with Nx, you can embed the model within your applications: you don't need a 3rd-party service because we batch and route requests from multiple cores and multiple nodes within Erlang/Elixir. This can be especially beneficial for projects like Nerves [5] and we will see how it evolves in the long term (as we _just_ announced it).

Finally, one of the benefits of starting from scratch after Python has paved the way is that we can learn from its ecosystem and provide a unified experience. You can think of Nx as Numpy+JAX+TFServing all in one place, and we hope that doing so streamlines the developer experience. This also means libraries like Scholar [2] (which aims to serve a similar role as SciPy) and Meow [3] (for Genetic Algorithms) get to use the same abstractions and compile to the CPU/GPU. The latter can show an order of magnitude improvement over other currently used frameworks [4].

[0]: https://dashbit.co/blog/nx-numerical-elixir-is-now-publicly-...
[1]: https://dashbit.co/blog/elixir-and-machine-learning-nx-v0.1
[2]: https://github.com/elixir-nx/scholar/
[3]: https://github.com/jonatanklosko/meow
[4]: https://dl.acm.org/doi/10.1145/3512290.3528753
[5]: https://www.nerves-project.org/


What will be interesting (and exciting) is seeing how GPU nodes are incorporated into platforms like fly.io, to really make this “elixir for ML/AI” thing seamless.


Definitely. Running ML/AI nodes close to your users on the edge and collocated with your app can be an exciting combo!


AI that is specifically learning from your behaviour with a (partial) model per user I suspect is both terrifying and magical.


I'm used to having a Python middleware in my Go web application just for ML. The idea of achieving the same with just one monolithic application using Elixir is very tempting.

Has anyone switched to Elixir from Go for writing web apps? What has been your experience like?


You will not regret it.


Please feel free to elaborate :)


I switched from JS to Elixir. There are many benefits you can google yourself, but one thing I've realized for myself is that, in regards to web apps, LiveView is a real innovation. Now PHP and others are copying what Elixir is doing, which is great, but it seems like Jose and others in the space are keeping Elixir on the frontline of innovation for a long time still. Time and time again (like with the ML stuff) it amazes me how much Elixir is actually inventing new stuff and thinking outside of the "just a language" box. I almost quit programming, honestly, because I just couldn't take the needless complexity of everyday life, and Elixir saved my career. Now I get shit done.


This is more than just naming things. It’s also that Ruby allows you to monkeypatch _any_ method, even if it’s a Kernel method or already defined by a library or framework (like Rails).

It’s extremely powerful, which is great! But I want a linter that tells me, “Hey - you’re monkeypatching that. I’m going to fail this build unless you explicitly indicate you know what you’re doing”.

I don’t know of a convincing linter that has a complete solution to this. Feel free to chime in if you know of one! I’m guessing this is possible in Rubinius, but not sure about the standard Ruby VM.


This is way off-topic, but . . .

You can create a RuboCop check that will complain about method_added and method_undefined. And to show that you know you're calling them even though they're forbidden, just add a comment telling RuboCop to ignore it. And make sure commits are code-reviewed, of course.

What more could you want?


This is precisely why climate change policy is so difficult to enact. Folks don’t understand the urgency until they experience it themselves, and by that point it’s gotten really bad.

There’s an analogy with tech debt or old software here somewhere.


I think it's with everything that doesn't have an immediate, graspable impact. Nobody would smoke if cigarettes killed you after a few months with a 50% chance. If they increase the likelihood of a stroke or other complications decades down the line, it's much easier to brush it off, tell yourself you're more of a Helmut Schmidt kind of person. Or just think it's a worthy tradeoff for the benefits you get from smoking today.

Same thing with child labor regarding smartphones, clothes, you name it. It's far away. If you had to buy it right at the factory at a counter where you could see the working conditions, it would have a vastly different impact on you.

And I'm not claiming to be smarter or superior to the average Joe here. This pattern strikes all the time, for everyone.


Smoking is probably a good analogy. In all ways. Because it took decades before we reduced the impact once we knew the dangers. And centuries before we even realized the dangers.

And smoking a cigarette won't kill you. Smoking one cigarette a day won't kill you. And most people who smoke don't actually get lung cancer.

But it all catches up with you. Smoking a cigarette a day for a decade is going to cause you to die earlier than if you hadn't. Smoking more, even earlier. Most is not all. Because most people who have lung cancer are smokers. And lung cancer isn't even the only thing. There's emphysema, heart disease, etc that's all related to smoking. And way more likely. But that's all aggregate.

Climate is a lot like that. It's nothing in isolation, it's everything in aggregate.


And even now that the dangers are widely and indisputably known we still have a hard time passing regulations to curtail the behavior because of addiction and entrenched profit motives.


So, totally not a climate skeptic, but part of the reason for this is that models are often wrong, and the more complex the system, the more likely the model is to be off, I think.

So to be fair to humans, skepticism is often rational, in the sense that science of complex systems can be off.

The part I have not totally understood is that there are good reasons to be more energy efficient and ecologically sensitive even in the absence of climate change per se.


I think the thousands of scientists who have been studying this phenomenon for the last 40 years have a much better picture than just about every skeptic that has muddied the waters with their hasty rhetoric.

If anything, scientists have been abundantly cautious with their messaging. Many predictions made in early IPCC reports turned out to be too lenient. Feedback systems, impacts, and the rate of warming have been happening on track or faster than reported. I suspect many knew but didn't want to be labelled as alarmists.


> If anything scientists have been abundantly cautious with their messaging

And then news outlets take that cautious wording and turn it into extremely alarming headlines. It's exhausting.


And yet the top of this thread begins with the claim that it's hard to alarm people (my words) appropriately enough to act. Ironic.


Assume the models are wrong: Why is that reason to believe that things will be better than the model predicts instead of worse?


Yes humans individually are absolutely horrible at long term thinking, it’s just part of our nature.

I wonder if we’re evolving to get better at that, if ever so slightly


It is all about tangibility. That is why we install car reverse parking sensors. If people had glasses that see air pollution, they would revolt. If people had access to a very accurate live and high-res computer simulation of climate-change, or anything, they would take it more seriously. People are spoiled with regard to the level of accuracy and tangibility they require to be convinced.


> If people had glasses that see air pollution, they would revolt.

I've ruined a couple of people's perspectives by sharing that the reason LA sunsets are so beautiful is the pollution particulate.


Heh, a small oil film on the water is beautiful as well. Or if chemicals give water a nice green or red shade. There are bright sides to everything.


> It is all about tangibility.

What if every weather app showed, next to actual temperature, what the temperature is modelled to be if climate change had been avoided (kept CO2 ppm to 1950s levels, say)?


In a German podcast, a guy once said: "You won't get them with melting icebergs and polar bears. They're too far away."

It's true, nobody* cares about icebergs and other things they've never seen before. Just wait until the drought kicks in and more and more problems arise. I hope then people will start acting themselves instead of shouting into the social platform nirvana.


I feel like we need movies/media that helps things feel more real, personal, negative, close to home.

Not movies that _focus on_ and sensationalize climate disasters, like The Day After Tomorrow, but movies set in the near future, where really visceral elements of how climate change played out over 10-20 years exist as a backdrop to whatever story is being told (but feel grounded in reality).

e.g. water rationing, abandoned towns/cities, authoritarian responses to increased immigration/migration, food shortages, etc...


I'd say 'Don't Look Up', but on second thought, the venn diagram of climate change denialists and people who don't realize the movie's a satire about climate change is almost a circle.


Children of Men feels like a very realistic dystopia of the UK five years from now.


Children of Men doesn't specify the cause for the infertility, I think? Only some handwavy "there were some chemicals", IIRC? At least I'm pretty certain it doesn't attribute anything to climate change.


Russell T Davies's Years & Years feels like a plausible near future. Bananas are extinct, banking systems failing, refugee crises.


> e.g. water rationing, abandoned towns/cities, authoritarian responses to increased immigration/migration, food shortages, etc...

More like Mad Max?


Ya, the original is a good example, but that's probably still too apocalyptic; one can watch that and scoff, saying "it'd never get that bad".

Stuff that's more focused on the immediate, painful changes but still in line of sight from our current reality. Children of Men did a pretty good job of pulling in some subtle "here's how society has changed for the worse" world-building (obviously all based on its underlying premise that no more children are being born).


> More like Mad Max?

More like California, Arizona, Utah, South Africa, etc.?


> "people start acting themselves "

There are only 3 'individual' actions available that have any real impact: voting, not eating meat, and pitchforks / civil unrest.

I predict we will reach phase 3 very suddenly, and then there will be all these talking heads on TV wondering 'oh my god, how did this happen?'.


Of these, voting is the only one with real impact, because no action an individual can take (save for suicide) can make their life carbon neutral. That needs policy.


What about saving energy and not wasting water?


Being a vegan protester is clearly the only way to have any impact


If you live in an apartment, don't own a car, and your main use of water is showering, there are no real energy savings available without cutting back on hygiene.


[flagged]


This reads like you're looking for a fight by using the least charitable interpretation of the quote.


Yeah, it's only really "us vs them" in the sense that "they" don't seem to realize they're also really "us."


Obama would say "we"


I say "folks" as a 1:1 synonym for "people" and only recently became aware that some folks/people find this in some way derogatory. I think it's a regional thing.


I’ve pretty much been using “folks” as a gender neutral “guys” because I find “people” potentially problematic (some constructions like “you folks” or “you all” are pretty much always casual while “you people” can sound charged, etc). I think this is common? Haven’t heard of people taking offense to “folks” before.


Polar bears don't work on me, because the polar bear population numbers provided are higher than in previous years!

The question for me is: why do people believe there is a problem? Is it that you just have to state polar bears are in trouble? Does anyone check the claims of the climate alarmists?

You should take a look at Al Gore's film again, and see how well that has aged.

Climate alarmists need to answer the claim that they are just boys who cry wolf, imo.


I don’t agree that the data support your claim. This link seems like a good summary: https://www.abc.net.au/news/2021-10-27/fact-check-gina-rineh...


I also apportion a fair amount of blame to the media.

There is a major 'boy who cried wolf' effect--because literally everything is maximally exaggerated and sensationalized to optimize engagement and revenue, people are numbed to the constant alarmism and there is no way to get through to them and convince them that this time the crisis is a real one that they can't afford to ignore.


This is one of our biggest flaws - being reactive instead of proactive. It doesn't always work.


Agree, and a big part of the problem is that politicians will not be proactive and spend money for no apparent visible return. If they appear to be wasting money, they're unlikely to get re-elected. It's the same as asking IT accountants to invest in mirroring systems that appear never to fail - until they do...


Exactly. For a simple example, look at the criticism about Lithuania’s LNG terminal[1] a few years ago, built to reduce their dependency on Russian gas.

I think they are pretty happy about that terminal now.

[1] https://www.lrt.lt/en/news-in-english/19/1111346/five-years-...


I actually wonder if humans lack a crucial adaptive advantage if they do not intuitively understand how systems work. But then it occurs to me that some ancient philosophies and religions emphasised the need to be in tune with the surrounding world.


We don't understand complex systems well at all.

I think the more defective part of humans is our near-complete inability for long-term thinking and planning, especially collective long-term thinking and planning. Just look at our daily lives and jobs. When are long-term plans ever truly engaged with and acted upon? Almost never. There is much too much self-induced noise in society and the economy, and there's a hyper-focus on short-term results and concerns.


I am thinking more of an automatic ability to see how things are related. Chinese language(s) ↔ Taoism, in the context of a holistic approach to worldview [0]. I know I am exaggerating, but maybe some meditative training could help in this regard?

[0] https://www.nytimes.com/2008/03/04/health/04iht-6sncult.1.10...


How would understanding systems have been significantly adaptive for humans before a few thousand years ago? I agree with you that humans generally lack this capability. We also lack the ability to understand exponential growth, which I think is partially a cause of our lack of ability to comprehend systems.

My personal theory on this is that it comes from the fact that our sensory systems operate on a logarithmic response curve [0]. Note, for instance, how the decibel scale for measuring sound intensity is a logarithmic scale. Because our sensory systems respond logarithmically, that means an exponential increase in stimulus feels linear, at least until the point where the stimulus is damaging or so intense as to be uncomfortable. The end result is that we think "it's not so bad" until it's really bad.
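
To spell that out with the standard Weber-Fechner form (my notation, just to make the claim explicit): if perceived intensity is p = k * ln(S / S0) and the stimulus grows exponentially, S(t) = S0 * e^(r*t), then p(t) = k * r * t, i.e. the percept grows only linearly while the stimulus explodes.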

---

[0]: https://en.wikipedia.org/wiki/Weber%E2%80%93Fechner_law


I find your perspective interesting, but have the feeling that you are not thinking in systems (!). The sensory systems are perceptual systems, but they are subsystems of a larger "cognitive" system, and we cannot be sure that it exhibits the same logarithmic response behavior.


I think you hit the nail on the head.

The Bible says: “It does not belong to man who is walking even to direct his step.” —Jeremiah 10:23.

According to this, humans were not created to rule themselves.

We do a poor job of governing other people and solving global problems.

To me, it's just clear at this point that this is our core problem.

Source: https://wol.jw.org/en/wol/d/r1/lp-e/102019005#h=15:0-16:0


> The Bible says: “It does not belong to man who is walking even to direct his step.” —Jeremiah 10:23.

It doesn't matter what the bible says. You can thump it all you want. It won't reduce climate change, excuse inaction, or absolve religious fatalism.


No, the reason is that people pretend that their favourite freedom-restricting policy is “against climate change”.

Ban meat. Ban non-private flights. Turn off hot water. Ban nuclear. Ban bitcoin. Ban aircon.

All fake, authoritarian “solutions”.

The real solution is simple: tax fossil fuels. That's literally all you need to do to solve climate change. It's also the only thing you can do.


I live in the country, and a lot of farmers and even workers who have to commute would probably be hit quite hard by any spikes in fuel prices. We already felt it when the prices were going up recently. It doesn't help that a lot of jobs that can be remote still aren't, or still don't even give the option to be. We also aren't feasibly able to switch to electric at this point, as I can't name any nearby gas stations that allow for electric car charging.


Tax fossil fuels? When I fill my car with benzine at 2 euro/liter would you like to guess how much of that is taxes? Same for the gas heating in my apartment. And the electricity that powers my PC. The cost of the actual fuel or energy is only a fraction of the price I pay, the rest is taxes (plain old taxes, or taxes masquerading as operating / distribution costs).


I didn't read tomp's post as saying that taxes weren't already being assessed. I'm pretty sure tomp was arguing that the taxes need to be _higher_.


Nobody does anything until there are bodies on the ground.


Bodies that look like their own, too often.


It doesn't help that everyone's constantly being lied to by sensationalized hyperbolic news from the mainstream media.

We've been conditioned to assume everything's bullshit until it's objective reality on display before our own eyes.


I think this is the case for many of the catastrophic scenarios that a human/humans can face. Think about people smoking, or eating really really poorly, or driving drunk. And the worst thing is that in many cases people will revert to the old behavior provided they survive.


I wonder whether frogs have a tale about how humans behave on a warming up planet.


I sometimes wonder: if we released a Civilization-style game where climate change is a harsh, unforgiving price for using coal and oil, would it change anything? Is it too late for such a product?


It's been a big feature in the later games. But it's been there since the start. Civ 1 had a simple mechanism where highly industrialized cities would create polluted tiles. You could clean up the polluted tiles with settlers. But if your settlers didn't clean it up quickly enough, there would be consequences. Plains tiles would turn into desert. Coastal tiles would turn into swamps.



In all the Civ games, it's always been relatively easy to manage.


It was also in civ2 as a base feature.


[flagged]


Are you trolling? It just requires on the order of 1x GDP to install enough renewables and storage to start reversing climate change.

It is totally doable, but people are just trying to skirt the costs of climate neutrality.


And what storage mechanism are you imagining? People fail to realize that only ~300 GWh of batteries are produced each year, compared to ~60 TWh of daily electricity use (and about twice that much in terms of total energy use). Even an attempt to install just one hour of storage capacity would require several times the storage that is produced globally in a year.

Any serious attempt at producing grid storage would lead to shortages and increases in prices. This is why plans for a renewable grid assume that some heretofore unused storage mechanism - like hydrogen storage, compressed air, or giant flywheels - will make energy storage nearly free. Because existing storage mechanisms can't be produced at scale.
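
Rough sanity check on those figures, taking them at face value: 60 TWh/day ÷ 24 h ≈ 2.5 TWh of average draw, so a single hour of storage would be ≈ 2,500 GWh - more than eight years of output at 300 GWh of new batteries per year.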


He's not. Zero emissions today, while impossible, would not remove gases from the atmosphere in a meaningful way - and removal is also something we cannot presently accomplish.

Past emissions will stay for centuries and increase heating globally for centuries to come.

All the reports stop at 2050 to 2100, but none of them have any sort of peak temperature in sight.

The extremes of today are only the very beginning.


If by storage you mean batteries, are you sure that the benefit over their production/usage/recycling lifecycle makes up for the environmental cost of producing/recycling them?

I am all in for green energy, but we have to be honest and estimate well, otherwise we will simply continue as we are now (if not make matters worse).


A common trope is "this replacement (nuclear/wind/solar) isn't perfect, so let's keep building fossil fuels."

Every 1 kWh produced by a windmill is 1 kWh less of oil being burnt. We don't exactly have an abundance of energy at the moment; there's no excuse not to be diverting vast amounts of planetary resources into renewable production.


I am certain that is the case for batteries. Recycling is always easier than digging up and processing the rocks which contain a few percent by weight of each relevant element, and they're already a net win (with regards to CO2) even if you do that.


> Recycling is always easier than digging up

Easier? Most of the stuff that is supposed to be recycled ends up in landfills.


reversing what? I just mentioned the CO2 is already out there. How do you put the genie back in the bottle?

> trolling

Please don't accuse others of your own behavior.


We can do plenty to stop it getting worse (and in fact are).

There are also plenty of ways to take CO2 out of the air (several of which literally grow on trees, or are trees), the question for both organic and technological CO2 sequestration is economics.

For a sense of scale, human emissions are about 40 Gt CO2, global primary production is 104.9 Pg, so making the world about 10% more fertile would have the same effect as decarbonising the economy, or equivalently remove 1 year of existing excess carbon if we also decarbonised the economy: http://www.wolframalpha.com/input/?i=%2840%20Gt%20%2F%20mass...

(10% is a lot, but not so much it would be crazy to consider).
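
Back-of-the-envelope version of that comparison (assuming the 40 Gt figure is CO2 and primary production is measured in carbon, with 1 Pg = 1 Gt): 40 Gt CO2 × 12/44 ≈ 10.9 Gt C, and 10% of 104.9 Pg C ≈ 10.5 Gt C, so the two are indeed about the same size.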


> There are also plenty of ways to take CO2 out of the air (several of which literally grow on trees, or are trees), the question for both organic and technological CO2 sequestration is economics.

As you mentioned, none of them are currently economical. You can't drive such a thing if it doesn't make sense financially.


And I will never respect your policy, for you lack the self-awareness required to acknowledge its risks, and they are enormous - Stalin-era-population-displacement enormous.

I won't go on about Nazino.


If I’m understanding correctly, this switches your profile on some interval between A and B, whereas a proper A/B test will randomly bucket a user to experiment A or B.

Not that it matters - this solution is probably the right way to go without building something into Twitter itself - but the more data/stats oriented folks may be confused or irked by calling this “A/B testing”


That is correct - it switches your profile version at a regular interval.

Indeed, that's the only way I could do it. The Twitter API has its limitations :D

I didn't know there was a specific definition of A/B testing. I'll see if I get more complaints about the terms I use ^^.

To me, that's still A/B testing - that is I'm testing a version A and a version B and then report on which one does better. I guess the way I'm doing it is different :D


Definitely agree that this is a good solution given the limitations, I think the only downside to this approach is that there might be some effect based on time of day or the interval affecting your results. That said I think the chances of that are super low and A/B testing is only so accurate anyway. Great idea and nice website!


Thank you!

> I think the only downside to this approach is that there might be some effect based on time of day or the interval affecting your results.

That is definitely true. I'm about to start alternating the versions every 5m to mitigate this. The closer I can get to 0m, the more accurate the results are. This way even if you get a followers spike (let's say you get a viral tweet), the followers will be properly distributed between each version.


I think what you really want to do is randomize A or B within each 30m (or 5m) interval. This is basically switchback testing. Door dash has a nice write-up here: https://doordash.engineering/2018/02/13/switchback-tests-and...
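
In case it helps, a hedged Python sketch of that idea (the 5-minute window and seed are placeholders): each time window gets a random but deterministic variant instead of a strict A/B/A/B alternation, so time-of-day effects average out across both variants.

    # Hedged sketch of switchback assignment.
    import random
    import time

    INTERVAL_SECONDS = 5 * 60  # hypothetical switchback window

    def variant_for(timestamp, seed="profile-test-1"):
        # The same window always maps to the same variant, but which variant a
        # window gets is random rather than strictly alternating.
        window = int(timestamp // INTERVAL_SECONDS)
        return random.Random(f"{seed}:{window}").choice(["A", "B"])

    print(variant_for(time.time()))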


Yep - we've run many switchback tests so I'm happy to chat more about it. It's a lot more akin to what you're building here, from a stats point of view.


Oh right, that's actually a better technique! Thanks for the idea.


Then you get the opposite problem: users are more likely to see multiple versions of the same profile.


I feel like that's less of a problem because the user is more likely to convert on the version that he likes more, which would still provide accurate data.

But yes, there is no perfect solution with the limitations of the API :D


That's cool man apologize for what is for you to apologize, and nothing else. Nothing else. And do your thing in the semblance of your vision of your ideas, all the way and show the world. And the world won't like it.

You gotta be sure of what you know. Be sure and be wrong, better than always being unsure.


Thanks Daniel, I guess xD That was quite cryptic, but I think I get it haha. Cheers!


I hear ya - just giving the data/stats perspective. It's an audience that cares a lot about these details.


Why is bucketing users the better approach? To me it seems like bucketing would ignore "all other factors" that also changed, whereas dynamic (or periodic) switching seems like it would normalize those "other factors" across both A/B (ideally).


I suppose that the issue is that a single user could see both versions, which could skew the data.

I'm not sure I totally understand why, because if a user follows you after seeing the other profile version, it might be because he preferred this other version.

But conceptually it makes sense to eliminate as many variables as you can to isolate the components of the test.

