Search engines are websites that people used to go to in order to get helpful information. These days, they just spam out a bunch of SEO garbage, AI-generated bullshit, and ads.
Google, probably
Yes, a lot of developers have done this. Many examples have been posted in this thread (OsmAnd, Conversations, Davx5) - Mindustry is another example: free on F-Droid (and the Google Play Store too, I think), but $10 on Steam.
I haven’t done too much work with WASM myself, but when I did, the only languages I saw recommended were Rust, C++, or TinyGo. From what I’ve heard, Rust and C++ are smoother than TinyGo. Garbage-collected languages usually aren’t great choices for compiling to wasm because wasm doesn’t have any native garbage collection support, and that narrows your options a lot.
But another option you may want to consider is Nim. As I understand it, it compiles to C, so any C-to-Wasm compiler should theoretically work for you as well. I did a quick search and wasn’t able to find any great resources on how to do this, but you might have better luck than I did. Good luck!
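For what it’s worth, the C/C++ route through Emscripten is pretty short; here’s a rough sketch (the file name and flags are just illustrative, and the exact invocation varies by Emscripten version). Since Nim emits C, it should in principle be able to ride the same toolchain by pointing Nim at emcc as its C compiler, though I haven’t tried that myself.

```cpp
// add.cpp - a tiny function exported to WebAssembly via Emscripten.
#include <emscripten/emscripten.h>

// extern "C" avoids C++ name mangling; EMSCRIPTEN_KEEPALIVE keeps the
// optimizer from stripping the symbol even though nothing calls it natively.
extern "C" EMSCRIPTEN_KEEPALIVE int add(int a, int b) {
    return a + b;
}

// Build (roughly): emcc add.cpp -O2 -o add.js
// That produces add.wasm plus a small JS loader; in the browser you would
// call Module._add(1, 2) once the runtime has initialized.
```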
You’re probably right. I think COBOL development is one of the cases where the crazier stories are the ones that bubble to the top. The regular scene is probably more mundane.
I do think there are a few advantages to learning COBOL over C++. COBOL seems to be much stickier - companies that use it seem much more hesitant to replace it than a lot of the companies that use C++, and as a result, they will probably get more desperate. And while there’s definitely a lot more C++ out there than COBOL, I have to imagine that the number of people under 50 that use COBOL is probably tiny, while C++ still has a very large userbase. On the other hand, consulting depends a lot on your portfolio, references, and past accomplishments, and nobody’s going to pay 1k EUR/USD/etc. per hour (exaggerating, obviously) if you don’t have any credentials. It takes time to build that up.
Ultimately, I do think you’re pretty spot on, but we’ll have to see. This is more just a fantasy I tell myself to make it seem like retirement is closer than it probably is…
I fantasize about being one of those extremely well-paid Cobol consultants when I reach the later stages of my career. Hoping that I can earn a full year’s salary in 3-4 months and take the rest of the time off as a semi-retirement. It would be easier said than done, but it’s a dream that helps me get through the days when I get sick of the daily grind.
This is very interesting! Things like this make me wish programmers would give functional^W declarative programming more of a chance. I’ve long fantasized about being able to write programs as declarative code that the computer can optimize automatically, without human intervention. When you implement your program in a more restrictive (i.e. stateless) paradigm, it’s easier to reason about the code, which in turn makes it easier to optimize or to run in different environments.
SQL is a great example of this. The optimizations that servers like PostgreSQL can do under the hood are only possible because the language inherently limits what you can express, so the system actually executing your query is free to rearrange the work for better performance and reliability. That’s what makes query optimizers possible, and it’s really fascinating to read carefully what the query planner reports (beyond just checking whether your indices are being used or not).
Beautiful chart. Thanks for sharing!
This is quite cool. I always find it interesting to see how optimization algorithms play games and to see how their habits can change how we would approach the game.
I notice that the AI makes some unnatural moves. Humans would usually try to find the safest area on the screen and leave generous amounts of space in their dodges, whereas the AI here seems happy to make minimal motions and cut dodges as closely as possible.
I also wonder if the AI has any concept of time or ability to predict the future. If not, I imagine it could get cornered easily if it dodges into an area where all of its escape routes are about to get closed off.
Git’s internals are very easy to understand, and once you know more about them, you’ll have a much better idea of how it works (especially when it comes to tags and branches). They’re so simple that you could easily write your own scripts to parse git’s internal data directory if you wanted to.
I would highly recommend reading about them: https://git-scm.com/book/en/v2/Git-Internals-Git-Objects#
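To give a sense of how approachable the on-disk format is, here’s a toy C++ sketch that resolves HEAD to a commit hash just by reading the plain-text files under .git/. It deliberately ignores packed refs and every other edge case, so treat it as an illustration rather than a tool.

```cpp
// resolve_head.cpp - resolve HEAD to a commit hash by reading the
// plain-text files git keeps in the .git directory.
#include <fstream>
#include <iostream>
#include <string>

// Read the first line of a file (enough for HEAD and loose refs).
static std::string read_line(const std::string& path) {
    std::ifstream in(path);
    std::string line;
    std::getline(in, line);
    return line;
}

int main() {
    std::string head = read_line(".git/HEAD");

    // HEAD is usually a symbolic ref like "ref: refs/heads/main";
    // on a detached HEAD it is the commit hash itself.
    const std::string prefix = "ref: ";
    if (head.rfind(prefix, 0) == 0) {
        std::string ref = head.substr(prefix.size());  // e.g. refs/heads/main
        std::string hash = read_line(".git/" + ref);   // a loose ref file just holds the hash
        std::cout << ref << " -> " << hash << "\n";
    } else {
        std::cout << "detached HEAD at " << head << "\n";
    }
}
```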
there has to have been a significant global productivity cost due to the lack of a better UI.
I’m not so sure about this to be honest. If it were really that big of a problem, someone would have made an effort to resolve it. The fact that people still use it anyway suggests to me that it’s a bit of an overblown issue.
Xournal - a great way to draw on PDFs
Agreed on all points. I think some of the issues you’re facing are things that would be resolved if OCaml were more popular, but some others would be harder to fix without making breaking changes to the language, as I mentioned earlier. If I had to put it as succinctly as possible, I’d say the language just needs a lot more polish, which would probably happen if it were more mainstream. But not all languages have to be mainstream, and maybe OCaml’s purpose in the world is, as you put it, to inspire other languages. It is definitely extremely good at that!
No one has said OCaml yet, so I will. It’s not a perfect language, but it has a lot of cool ideas and concepts. It’s a functional language, but it lets you write imperative code when you want to. Algebraic data types and pattern matching are built natively into the language and work very nicely. Its type inference capabilities are very powerful (though that can backfire at times), and the |> operator is really, really fun to use. It also has very powerful module/functor capabilities, though they go a bit over my head since I haven’t had a chance to play with them. Also, Opam is a very powerful package manager, and it’s pretty easy to wrap/bind external libraries with it.
I’d love to see some improvements to the language - the syntax is a bit confusing and ugly at times (though of course that can’t be fixed without breaking the language) - but overall I think I’d have a lot more fun programming in OCaml than I do in my day job.
I think most of the arguments here are kinda ridiculous and poorly thought out. A lot of them also sound pretty imaginary and made-up. For example:
To assert that frontend languages are not programming languages is to assert that what one is doing when writing them is not programming, but something else. Something different.
Something—perhaps not explicitly spoken, but undeniably implied—lesser.
Basically, he’s arguing that everyone who thinks HTML/CSS isn’t a programming language is wrong, and that the only reason they feel this way is prejudice against front-end developers. I think that’s a wild leap in logical reasoning, personally.
(No mention of Javascript/Typescript here by the way.)
If you wanted to find the dev specialization with the most people who aren’t cishet white males, you’d pick frontend.
Do we honestly believe the language around frontend is different purely by mere coincidence?
… yes? His argument that HTML/CSS should be considered programming languages is honestly quite weak. Couldn’t that be the reason instead?
Certain pursuits are validated with importance, dignity, and honor.
Doctors; lawyers; architects; CEOs; software engineers.
… we relegate others to the role of the sidekick - even though their labor is no less important, and they do at least as much to push the work toward success.
Nurses; paralegals; interior designers; executive assistants; frontend developers.
Who the hell is making these groupings?? Front-end developers compared to nurses? Software engineers to doctors? And software engineers being held in the same light as CEOs… wtf???
(Surely it’s a coincidence the first group tends to be more male than the second.)
Once again, he’s attributing his feelings to prejudice when really, I think his arguments are just very poorly thought out.
Other forms of development are generally considered serious work. They’re important. They’re real computer science. (Computer science itself being a higher level of things we’ve decided are real, serious, and important—maybe not quite as much as medicine or law, but then again, maybe so in some circles.)
Again, I don’t see anyone arguing or claiming this. I’m sure the author would say that it goes unsaid but is implied anyway; I honestly think no one says it because it’s just silly.
Writing CSS seems to be regarded much like taking notes in a meeting, complete with the implicit sexism and devaluation of the note taker’s importance in the room.
Though critical to the project, frontend work will quite often be disregarded by those who consider it beneath them (usually men, and usually only tacitly, never explicitly). It’s not serious enough; not important enough; not real enough. Too squishy. Like soft skills.
Once again, just unfounded accusations of bias. “You didn’t say it, but I’m telling you that you said it anyway.”
Their [software engineers’] output is easily measurable. A new API feature; a more efficient database; crises averted and crashes prevented. They go on charts and get presented to board members.
Board members couldn’t give less of a shit about what software engineers do. We’re considered a cost that they’d love to get rid of as much as any other position. Look at all the AI hysteria going on right now, like Nvidia’s CEO telling people not to go into software because it won’t exist anymore. Again, I have no clue where this guy is getting his ideas from.
If our job title does include the word “engineer,” it will almost certainly specify what we’re engineering. It’ll be UI engineer, or frontend engineer, or maybe the newer (and arguably more fitting) design engineer.
But it’s probably not “software developer” or “software engineer” without any other qualification. Because that, tacitly, is not what we do.
Completely disagree. Front-end development is a subset of software engineering. He even admits as much:
Sure, this is nuance of language and these titles serve to disambiguate. I get that.
but then he goes on to dismiss that by saying “that’s not really it though, it’s really because we’re not considered real engineers”:
by definition, somehow what we do isn’t seen as software engineering. It’s different than that. It’s softer than that.
By what definition exactly? He just explained the reason for the difference in terms above, but then goes on to say that’s not really it - the real reason for everything bad is (what he perceives as) negative bias.
There are a couple of interesting ideas in here. He makes a good point that layoffs on the front end are more likely to hit underrepresented groups, though there’s not much that can be done about that. Layoffs are happening everywhere, and DEI is probably not what’s on CEOs’ minds when they make those decisions. And sure, there are unrealistic expectations at times, but that happens everywhere - not just in the software industry, but in pretty much any labor scenario.
But overall, I think this guy has major issues with his self-perception. Pretty much all of his arguments are predicated on very poorly thought-out or straight up imaginary ideas. And blaming everything that’s wrong with his perception of front-end development on the white male hierarchy is just… I can barely even find words for it… nonplussing? I think he figured it out by the end of the article:
Maybe I’m feeling sorry for myself. Maybe I’m just a little depressed right now. Maybe I have an inferiority complex and I’m projecting it on everyone else.
I’m pretty sure it’s all of those.
I wish this guy the best. Shit is hard right now. But I’d be a fool to say that I agreed with more than 10% of what he’s trying to argue.
My advice would be to learn C first (or at least develop a good understanding of it). It’s extremely important to understand how memory works in C so that you can understand pointers in C++; and also important to understand how functions work so you can understand classes and methods in C++. I would go through The C Programming Language. It’s fairly concise and while you don’t have to go through it cover to cover, you should at least understand the chapters on structs, pointers and functions (up to chapter 6, I believe).
(Note that the Wikipedia link I posted above has a link to the full text of the book in PDF format.)
The reason I think it’s important to understand C is that when you learn C++, you’ll understand how the language abstracts over a lot of the lower-level functionality in C. new in C++ supplants malloc in C, for example, and your understanding of functions in C will map onto more complicated concepts like constructors, destructors, copies, methods, and operators in C++. At that point, I would start learning how classes in C++ work. They’re basically structs with private member variables and methods defined in the scope of the class. learncpp.com is the best reference I’m aware of (it’s very thorough, which makes for a pretty slow read, but you’ll understand it very well). I would probably start with chapter 14 (introduction to classes) and then go back to the earlier chapters to fill in the gaps, but that depends on how you learn best.
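To make that mapping a bit more concrete, here’s a minimal sketch of the same idea written both ways (the types are made up purely for illustration):

```cpp
#include <cstdio>
#include <cstdlib>

// A C-style struct: just data, no behavior.
struct PointC { int x, y; };

// A C++ class: the same data, plus a constructor and a method.
class PointCpp {
public:
    PointCpp(int x, int y) : x_(x), y_(y) {}
    int sum() const { return x_ + y_; }
private:
    int x_, y_;
};

int main() {
    // C style: malloc hands you raw memory; you initialize and free it yourself.
    PointC* a = (PointC*)std::malloc(sizeof(PointC));
    a->x = 1; a->y = 2;
    std::printf("a: %d\n", a->x + a->y);
    std::free(a);

    // C++ style: new allocates memory *and* runs the constructor;
    // delete runs the destructor and then releases the memory.
    PointCpp* b = new PointCpp(3, 4);
    std::printf("b: %d\n", b->sum());
    delete b;
}
```

(In real C++ you’d usually avoid raw new/delete entirely, but the point here is just how the C concepts map onto the C++ ones.)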
Be aware, though, that if you don’t have existing experience with OO development, C++ is (in my opinion) not a great language to learn it in, because a lot of it is hacked on top of C and implemented in arcane ways in order to maintain compatibility with C. The first language I learned was Java, and having that background was really helpful when I learned C/C++. I’m only familiar with JavaScript at a procedural programming level, so I don’t know its OO functionality or how well that will translate to C++, but hopefully it works out.
Good luck!
Agreed overall - you will still be competent switching from one language to another, but intricacies and nuance matter a lot here. You may have enough knowledge to solve problems, but will you have enough knowledge to avoid creating new ones, like performance issues, memory leaks, or other unwanted behavior? C++ is a great example here: someone who’s smart but inexperienced might be just dangerous enough to start writing classes that hold raw (“dumb”) pointers without defining the copy constructors, and that’s a recipe for disaster.
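To make that concrete, here’s a minimal sketch of the footgun I mean (the class is made up purely for illustration):

```cpp
#include <cstring>

// A deliberately broken class: it owns a raw pointer but relies on the
// compiler-generated copy constructor, which copies the pointer, not the data.
class Buffer {
public:
    explicit Buffer(const char* s) : data_(new char[std::strlen(s) + 1]) {
        std::strcpy(data_, s);
    }
    ~Buffer() { delete[] data_; }
    // Missing: a copy constructor and copy assignment (the "rule of three").
private:
    char* data_;
};

int main() {
    Buffer a("hello");
    Buffer b = a;   // shallow copy: a and b now share the same allocation
}                   // both destructors run -> double delete -> undefined behavior
```

The usual fixes are to define (or delete) the copy operations yourself, or to sidestep the problem entirely by holding the data in something like std::string or std::vector that already manages its own memory.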
I think it would take more than a few months to develop the kind of experience you need to be aware of these issues and avoid them. And while C++ is a very easy example to point to here, pretty much all languages have their share of footguns, and it just takes time to learn them. A “deep knowledge” of a language is not just about being faster and more productive; it’s also about not creating more issues than the ones you’re solving.
This one might be a bit controversial, but has rung true in my general experience. Probably a lot of exceptions to these rules, but here goes:
You don’t really know a programming language until you understand a fair amount of the standard library and how packages/modules/dependencies work. Syntax is pretty easy, and any mainstream language will work just fine for solving basic leet-code style problems. But when you really spend a lot of time working with a language, you’re going to spend more time learning about common libraries and how to manage dependencies. If you’re working with a language like C++ or Java, this could also include build systems and how to use them.
Another precursor to being able to say that you know a language is being familiar with its best practices (e.g. how to name modules, how to write documentation) and common pitfalls (undefined behavior, etc.). This is one of the hardest parts of learning a new language, in my opinion, because the language may not necessarily enforce these things, but doing them the wrong way can make your life very difficult.
the only annoyances are that to use the slash you need to use shift
Oof, that sounds really annoying. I can’t possibly imagine how I would use the terminal that way.
But Swiss, that’s the stuff of nightmares!
Ha, that sounds funny (in a morbid kind of way…). What’s so bad about it?
Off-topic, but I’m curious why you would put Nim in that list. While I absolutely love the language, I’ve never heard of anyone using it for anything serious, especially compared to Rust or even Zig. I’d even be surprised if it has more mindshare than D.
(A real shame, by the way. Nim looks like an absolutely fantastic language.)
This is the one that broke my back. Understandable that XPCOM extensions had to go, but leaving nothing to replace them, and then going on to push their trash UI redesigns without giving us any recourse to change them back - that was just unforgivable.
Then again, that was still well before they started pushing spyware in their own browser, so in retrospect, those were very quaint times!