#compilers

now reading: Retrospective on High-Level Language Computer Architecture [Ditzel and Patterson 1980]: a summary of failed design approaches for

- reduction of the semantic gap between programming and machine languages
- reduction of software development costs
- aesthetics ("esoteric")
[High-level language computers] are aesthetically appealing to those not familiar with modern compiler writing technology. It is acknowledged that code generation may be simpler for a high-level language computer. What needs to be made more fully understood is that a high-level language instruction set does not eliminate the need for compilers, nor does it greatly simplify them. The need and complexity of compilers extends far beyond code generation. The amount of code necessary for preprocessing, lexical analysis, syntax analysis, assembly, optimization, loading, error detection, error recovery and diagnostics often dwarfs the part of the compiler concerned with code generation. The level of the target computer does not seem to have enough of an effect on the size of a compiler to warrant a totally new architecture.
ref: https://dl.acm.org/doi/pdf/10.1145/800053.801914
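The paper's point can be illustrated with a toy pipeline (a hypothetical sketch, handling only integer-addition expressions like "1 + 2 + 3"): even here, code generation is the smallest phase, dwarfed by lexing and parsing.

```rust
#[derive(Debug, PartialEq)]
enum Token { Num(i64), Plus }

// Lexical analysis: characters -> tokens.
fn lex(src: &str) -> Vec<Token> {
    let mut tokens = Vec::new();
    let mut chars = src.chars().peekable();
    while let Some(&c) = chars.peek() {
        if c.is_ascii_digit() {
            let mut n = 0i64;
            while let Some(&d) = chars.peek() {
                if let Some(v) = d.to_digit(10) {
                    n = n * 10 + v as i64;
                    chars.next();
                } else { break; }
            }
            tokens.push(Token::Num(n));
        } else if c == '+' {
            tokens.push(Token::Plus);
            chars.next();
        } else {
            chars.next(); // skip whitespace
        }
    }
    tokens
}

// Syntax analysis: tokens -> flat list of operands (the "AST" for a+b+c).
fn parse(tokens: &[Token]) -> Vec<i64> {
    tokens.iter().filter_map(|t| match t {
        Token::Num(n) => Some(*n),
        Token::Plus => None,
    }).collect()
}

// Code generation: the smallest phase -- emit stack-machine ops.
fn codegen(operands: &[i64]) -> Vec<String> {
    let mut out: Vec<String> =
        operands.iter().map(|n| format!("push {}", n)).collect();
    for _ in 1..operands.len() { out.push("add".to_string()); }
    out
}

fn main() {
    let code = codegen(&parse(&lex("1 + 2 + 3")));
    assert_eq!(code, vec!["push 1", "push 2", "push 3", "add", "add"]);
    println!("{:?}", code);
}
```

Lowering the target to a high-level instruction set would shrink only `codegen`; everything before it stays.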

#compilers #computerarchitecture #forth #retrocomputing

I want to read a #compiler book written in the last 15 years that covers the same topics as the Modern Compiler Implementation book by Appel, but uses recent terminology, tools and techniques. Any recommendations? #compilers #programminglanguages

EDIT: It seems like no such book exists. I guess I’ll have to read docs, blogs and papers along with old books to put things together myself.

Update on my very-very-WIP Rust compiler: I'm in the process of my third rewrite of the task queue system internals, this time as a separate crate that I'll publish on crates.io. Krabby isn't the only highly parallel, CPU-bound application where work is divided into small, synchronous tasks, so I may as well make this useful to others. I hope to get a blog post out this week.
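The post doesn't show the crate's internals, but the shape of such a queue can be sketched with only the standard library (a hypothetical minimal version, not the actual code: a fixed pool of workers draining a shared channel of small synchronous tasks):

```rust
use std::sync::{mpsc, Arc, Mutex};
use std::thread;

// Minimal fixed-pool task queue: `workers` threads pull numbers from a
// shared channel, run a small synchronous task (here: squaring), and
// send results back to the caller.
fn run_tasks(inputs: Vec<u64>, workers: usize) -> u64 {
    let (task_tx, task_rx) = mpsc::channel::<u64>();
    let (res_tx, res_rx) = mpsc::channel::<u64>();
    // mpsc receivers aren't Clone, so workers share one behind a Mutex.
    let task_rx = Arc::new(Mutex::new(task_rx));

    let mut handles = Vec::new();
    for _ in 0..workers {
        let task_rx = Arc::clone(&task_rx);
        let res_tx = res_tx.clone();
        handles.push(thread::spawn(move || loop {
            // Lock only long enough to pick up one task.
            let task = task_rx.lock().unwrap().recv();
            match task {
                Ok(n) => res_tx.send(n * n).unwrap(),
                Err(_) => break, // queue closed: no more work
            }
        }));
    }
    drop(res_tx); // keep only the workers' result senders alive

    let count = inputs.len();
    for n in inputs {
        task_tx.send(n).unwrap();
    }
    drop(task_tx); // close the queue so idle workers exit

    let mut total = 0;
    for _ in 0..count {
        total += res_rx.recv().unwrap();
    }
    for h in handles {
        h.join().unwrap();
    }
    total
}

fn main() {
    // 1² + 2² + 3² + 4² = 30
    assert_eq!(run_tasks(vec![1, 2, 3, 4], 2), 30);
    println!("ok");
}
```

A real implementation would likely use work stealing or a lock-free deque rather than a mutex-guarded channel; this sketch only shows the task/worker/result plumbing.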

For my next #compiler project, I want to write the optimization passes myself, but I don't want to deal with generating machine code for multiple platforms. So tell me, #programminglanguages #plt #pldev #compilers fedi: what is an IR that I can target that has a non-optimizing compiler to machine code and supports multiple platforms? This rules out most popular IRs like LLVM, C, QBE, Cranelift, etc.

In short, I want something that does only instruction selection, register allocation and codegen for multiple platforms. I don't need optimization, so I expect this thing to be really small and lightweight, unlike LLVM, GCC, etc.

I am building gcc-15.1.0 on my iMac G4 (Tiger) machine. It is on stage2, which is a good sign.

It will include C, C++, Fortran, Modula-2, Objective C, and Objective C++ compilers.

It will depend on my new PowerPC Mac OS X modernization library, libppc: github.com/ibara/libppc

I'll write a blog post about how to use it once it is all compiled; my goal is to produce a turnkey solution that just works(TM), including assembler, linker, and other utilities, as recent as possible for PowerPC.

And libppc can easily be extended to incorporate more C11 and later features. Hopefully others in the retro Mac community are interested in building that up with me.

My ultimate goal is to build some flavor of WebKit some day and have a modern web experience (even if slow, and possibly using X11). But in the meantime we will probably build a lot of excellent modern software to keep these machines going.


Final update on the TOML shenanigans: it works! My prediction regarding the latency was correct, average runtime for the entire "find and load a Cargo manifest" task has dropped from 180-200µs to just 20-21µs. There's definitely some measurement overhead to tracing (which I will probably address with something custom later), but given that these times include I/O, I think I've achieved a clear 10x speedup over the cargo_toml crate.

Most importantly, my trace logs no longer start with 50+ lines of "thread X is waiting for a task to do"; every thread is getting work within ~50µs. I wouldn't have spent quite as much time on this optimization if the logs weren't bugging me so much.

You can find the updated code here. The main reason for the speedup is probably the simpler control flow and the reduction in memory allocations.
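The kind of allocation reduction meant here can be illustrated with a toy example (hypothetical, not the actual manifest-loading code): borrowing string slices instead of allocating a fresh `String` on every iteration.

```rust
// Allocating version: builds a new String per line just to inspect it.
fn count_keys_allocating(lines: &[&str]) -> usize {
    let mut count = 0;
    for line in lines {
        let trimmed: String = line.trim().to_string(); // heap allocation
        if trimmed.contains('=') {
            count += 1;
        }
    }
    count
}

// Allocation-free version: trim() returns a borrowed &str slice,
// so the loop touches the heap zero times.
fn count_keys_borrowing(lines: &[&str]) -> usize {
    lines.iter().filter(|line| line.trim().contains('=')).count()
}

fn main() {
    let manifest = ["[package]", "name = \"krabby\"", "edition = \"2021\""];
    assert_eq!(count_keys_allocating(&manifest), 2);
    assert_eq!(count_keys_borrowing(&manifest), 2);
    println!("ok");
}
```

Both produce the same answer; in a hot path called per thread per manifest, dropping the per-iteration allocation is exactly the sort of change that compounds into the measured speedup.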

Pending any other tangents, my next step is to get back to parsing source code.