I recently created a static site with technical documentation using Antora. Antora uses asciidoc source that it pulls from git repositories. It has a lunr extension for search.
It took some time to get familiar with the setup, but I find it very nice to work with. I use it for the documentation of a set of code libraries, and it is a really good fit for that.
A good course will teach you everything on a topic in a structured way. Personally, I believe there is a lot more value in well-structured material than in a bunch of responses to rather ad-hoc questions to ChatGPT. Also be aware that not everything ChatGPT answers is correct. With a decent course, one can assume that the information is correct.
I feel this is similar to asking whether there would be value in paying for a well-written book on a topic, compared to just searching and reading information online on that topic, be it official documentation or third-party information from blog posts and other articles.
In my opinion, a well-written course or book will give you a much deeper knowledge in a much more efficient way than what you get from ChatGPT or from searching the internet.
Email is one of the few protocols that was designed from the start to be decentralized, that has in the meantime been implemented by a lot of big tech players, and that has stayed compatible for communication between accounts at any of those big tech players. And that is also the reason it won't disappear soon.
And aside from that, what the article mentioned is human error. Honestly, I do not think other online communication protocols will eliminate human error either.
A lot of big words in the summary (reimagine, crafting tools, transformative functionality, smoother workflow, push beyond typical CI/CD boundaries, empower developers, ...) and still, with that summary I have no clue what exactly this thing does.
Context: I am very familiar with several JetBrains IDEs and CI/CD in general, and a bit with GitHub Actions.
Please, can someone describe in a couple of sentences what this actually does?
> There's something that seems very satisfying about loading software from tape, too.
Uhhh.. as someone who used to load software from tape on a CPC 464 ages ago, I can tell you that it was painfully slow. Nothing satisfying about it for me...
You might like my Amstrad CPC demo from 10 years ago called Breaking Baud.
It was a custom turbo loader for the CPC that had a co-operative multi-tasking thing going on, so it had a loader, music player, realtime data decompressor and some spare time left to do some other things.
I had plans to make the demo even better, but what actually got produced was quite pure, because everything, literally EVERYTHING, is loaded directly into RAM and then decompressed in place (that's what the random white flashes you might see at times are - usually a spot where an RLE marker was placed in RAM and shown by the display hardware before the background task expanded/copied the RLE section).
As such, every step of the animations loaded could actually be a full screen independent of the last; it's just that most of the time it compresses super easily, as the delta from what's already in memory at that point is very small.
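To make the "RLE marker in RAM, expanded later by a background task" idea concrete, here is a toy sketch of run-length decoding. The marker byte and the (marker, value, count) layout are my own assumptions for illustration; the actual encoding used in the demo isn't described here.

```python
MARKER = 0xAB  # hypothetical escape byte; the real loader's format may differ

def rle_decode(data: bytes) -> bytes:
    """Expand runs encoded as (MARKER, value, count); all other bytes are literals."""
    out = bytearray()
    i = 0
    while i < len(data):
        if data[i] == MARKER and i + 2 < len(data):
            value, count = data[i + 1], data[i + 2]
            out.extend([value] * count)  # expand the run
            i += 3
        else:
            out.append(data[i])  # literal byte, copied as-is
            i += 1
    return bytes(out)

encoded = bytes([1, 2, MARKER, 0, 5, 3])  # "1, 2, then a run of five 0s, then 3"
print(rle_decode(encoded))  # b'\x01\x02\x00\x00\x00\x00\x00\x03'
```

On the CPC the expansion happens in place in video RAM, which is why the unexpanded marker bytes are briefly visible as flashes before the background task gets to them.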
This is a very early prototype that actually shows the speed of loading completely unrelated images, just relying on compression - some of the screens load 16KB in around 10 seconds: https://www.youtube.com/watch?v=qfGcvWOaFFw (BUT BE WARNED IF YOU ARE SENSITIVE TO FLICKER)
I linked below in another comment a version of the last video captured on an emulator, so no flicker. Also, I set the correct palette for each picture, so it's a bit nicer to look at anyway: https://www.youtube.com/watch?v=NxX4fu6dAzo
As a software engineer, I insist on giving developers high-end laptops. The reason is very simple: a lot of development environments are very heavy to run, and developers should not waste time on their development tools running slowly. I also don't want developers to disable tools that are meant to keep an eye on the quality of the code. High-end laptops generally serve well for development for up to 5 years.
Developing on high-end laptops should definitely not be an excuse to deliver slow software, and in the teams I work in, we do pay attention to performance. You are right, though: a lot of software is a lot slower than it should be, and in my opinion the reason is often developers who lack fairly basic knowledge about data structures, algorithms, databases, latency, and so on. One could say that time pressure on the project also plays a role, but I strongly believe that lack of knowledge plays a much bigger one.
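As a toy illustration of the kind of basic data-structure knowledge I mean (the sizes here are arbitrary): membership tests against a Python list scan the list, while a set does a hash lookup, and the difference dominates as data grows.

```python
import time

def count_hits(container, probes):
    """Count how many probes are found in container; also return elapsed seconds."""
    start = time.perf_counter()
    hits = sum(1 for p in probes if p in container)
    return hits, time.perf_counter() - start

items = list(range(5_000))
probes = range(0, 10_000, 4)  # half of these hit, half miss

hits_list, t_list = count_hits(items, probes)       # list: O(n) scan per probe
hits_set, t_set = count_hits(set(items), probes)    # set: O(1) average per probe

print(f"list: {t_list:.4f}s, set: {t_set:.4f}s, same answer: {hits_list == hits_set}")
```

Same result, wildly different cost; no amount of hardware makes up for picking the wrong structure in a hot path.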
Now, aside from that, also keep in mind that users (or the product owner) become more and more demanding about what software can and should do (deservedly or not). The more a piece of software must do, the more complex the code becomes and the more difficult it becomes to keep it in a good state.
Lastly, in my humble opinion, the lowest-range budget laptops are simply not worth buying, even for less demanding users. I think that most users on a low budget would be better off with a second-hand mid-range or high-end laptop for the same price. (I am talking here about laptops that people expect to run Windows on; I have no experience with Chromebooks.)
> users (or the product owner) become more and more demanding
I disagree. For all my life, customers have been asking for as much as they can imagine. Customers wanted flying cars long before they wanted the latest iPhone.
The thing that changed is that we realised that if we write lower quality software that has more features (useful or not), customers buy that (because they are generally not competent to judge the quality, but they can count the features). So the goal now is to have more features.
> I think that most users on a low budget would be better off with a second-hand
Which is exactly the problem we are talking about: you are pushing for people to get newer hardware. You just say that poorer people should get the equivalent of newer hardware for the poors. But people on a budget would actually be better off if they could keep their hardware longer.
I think one should not use advanced language features just because, but I also think one should not avoid using advanced language features where it is useful.
Why would the code base be worse when advanced language features are used?
I don’t understand your comment: if you develop software as a team, it seems important to communicate and to know what the others in your team are working on?
Also, I really don’t see it as ‘social’ meeting, to me it’s a focused technical meeting about the work that is going on.
Yes to communicating with each other and knowing what everyone is working on, no to doing it every morning. Once a week, every two weeks, or once a month is fine. Any essential communication that can't wait for the recurring meeting can be handled ad hoc as you go.
But people who have only done capital A agile and scrum are so buried in the philosophy that they don't understand that there are far better ways to do things.
I'm not sure how that's supposed to work if most team members are burning through 14 tasks or 10 tasks or 5 tasks in a two-week period. If your tasks are two-week chunks, then you're doing something completely different.
Over a 40 year career I've done all kinds of methodologies. If you understand that there are far better ways to do things, I'm all ears.
The big advantage to agile/scrum methodologies in my opinion: dramatically improved predictability. Total elimination of drama. Efficient management of expectations outside of the development group. Never having to do a death march ever again.
Given the rest was reasonable I was willing to grant that as "aims for high quality".
Because it wasn't about how good a programmer they are; it was about their intentions and principles. "I try" vs "I don't try" already makes a significant relative difference, regardless of what the absolute values are.
When I read about people using AI to write code, it always seems to me that it would be a lot faster and less hassle if they wrote the code themselves, without even considering that it would give them more experience and make them better.
I have used some of those tools myself, and for the code where I could use the help of an AI tool, I receive junk again and again: code that looks plausible but does not compile, uses APIs or libraries that do not exist, and so on. In the end, it just made me waste time.
I find it useful when I'm working with some unfamiliar technology. It can get me up and running much faster, and even debug code to some extent.
I used it yesterday to fix a problem with my makefile. I couldn't see the issue, so I gave it to ChatGPT. It gave me some code that didn't actually work for some reason, but which had the correct solution to my problem in it. I just ported the solution over to mine, job done. This was after spending a couple of fruitless hours reading documentation and blog posts.
we had a guy on our team who was using ChatGPT aggressively in his awful code, to the point of putting his prompts in as comments
he did not last long...
this is the kind of developer you have to explain to (repeatedly) why things need to run on servers with service accounts rather than on his laptop under his personal ID