@ekaitz_zarraga
Not sure about C, but C++ at least is an ambiguous language and can't be parsed in a single pass.

And your compiler must handle specific platforms/architectures/settings/APIs/ABIs/processors, etc.

Because even a simple 'Hello world' in C can vary greatly between Unix and Windows, for example; so can building a lib/dll/a/so, or building the code you're compiling for optimization vs. debugging, etc.
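To make that concrete, here is a sketch of how the same trivial C file gets built differently depending on the goal (standard GNU toolchain invocations; the file name `hello.c` is just an example):

```shell
# Debug build: no optimization, full debug info
gcc -O0 -g hello.c -o hello_debug

# Optimized release build
gcc -O2 hello.c -o hello

# Static library (.a): compile to an object file, then archive it
gcc -c hello.c -o hello.o
ar rcs libhello.a hello.o

# Shared library (.so on Unix; a MinGW gcc on Windows would produce a .dll)
gcc -shared -fPIC hello.c -o libhello.so
```

One compiler driver, four quite different outputs, and that's before architectures and operating systems enter the picture.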

And because of backward compatibility.

@suetanvil


@Xipiryon

Actually, I think this is the real answer to @ekaitz_zarraga's question.

I don't know about other compilers, but GCC's (huge, overwhelming) complexity is mostly due to the supported combinations of

- languages
- architectures
- operating systems
- optimizations
- diagnostics / debug
- internationalization

Reading this from top to bottom might give you an insight: gcc.gnu.org/install/configure.

GCC is not just a compiler but the GNU Compiler Collection.

It tries to maximize the possible use cases, including several niches and corner cases that simpler C compilers simply don't consider.

Why?

Well, there is obviously an ideological aim: providing a high-quality compiler suite to everybody, no matter how peculiar their needs are (to reduce the attack surface from proprietary software).

But there is also a reasonable architectural goal: maximizing the reuse of a large, high-quality code base that is common to the various combinations of needs.

The price of this is huge complexity, due to the tensions between different perspectives on how computing should work.

I don't like such complexity (really, I hate it), but it's very short-sighted to blame it without understanding the overall vision that GCC pursues.

@suetanvil@mastodon.technology @codewiz

@Shamar @Xipiryon @ekaitz_zarraga @suetanvil I used to think compilers were complex, now I don't.

There are complex concepts, complex algorithms and complex data structures...

...but their *structure* is quite simple. They're essentially the ideal program for a computer scientist: the kind that takes just one input (the source code), generates just one output (the object code), and is 100% reproducible because it has no internal state (purely functional).
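That determinism is easy to check in practice: compiling the same translation unit twice should yield bit-identical object files. A sketch, assuming a Unix-like shell with gcc on the PATH (caveats such as `__DATE__`/`__TIME__` macros or embedded timestamps can break it):

```shell
gcc -O2 -c foo.c -o foo1.o
gcc -O2 -c foo.c -o foo2.o
sha256sum foo1.o foo2.o   # identical hashes: same input, same output, no hidden state
```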

@Shamar @Xipiryon @ekaitz_zarraga @suetanvil Even if modern compilers boost performance with pre-compiled headers, threaded passes and incremental linking, all these things are optional and well hidden in the high-level structure, which is very very modular.

You can essentially enable and disable every single optimizer, and dump the intermediate code in between them to debug and test.
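For example, GCC lets you switch off individual passes inside an optimization level and dump the intermediate representation after each pass (these are real GCC flags, though the exact dump file names vary by version):

```shell
# Keep -O2 but disable one specific optimization
gcc -O2 -fno-tree-vectorize foo.c -o foo

# Dump the IR after every GIMPLE tree pass and every RTL pass
# (produces a series of foo.c.* dump files next to the source)
gcc -O2 -fdump-tree-all -fdump-rtl-all -c foo.c
```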

@Shamar @Xipiryon @ekaitz_zarraga @suetanvil Trust me, compilers are the *dream* for any test engineer. With kernels and network services, you don't get that lucky: you get concerns like timeouts, security, race conditions, realtime issues...

@codewiz @Shamar @Xipiryon @ekaitz_zarraga @suetanvil I've read that LLVM has a developer tool for automatically narrowing down in which optimizer pass a bug occurs...

@alcinnz @Shamar @Xipiryon @ekaitz_zarraga @suetanvil Ah, cool. It probably works like git bisect, but switching optimizations rather than commits.
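For reference, LLVM exposes this as the `-opt-bisect-limit=N` option, which runs only the first N optimization passes so a script can binary-search for the pass that introduces a miscompile; the `bugpoint` tool automates a similar narrowing. A sketch via clang:

```shell
# Run only the first 30 optimization passes, then pass everything else through
clang -O2 -mllvm -opt-bisect-limit=30 foo.c -o foo
```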

@codewiz

I'd almost say the opposite: there are simple concepts, simple algorithms and so on, but their structure is insane.

You can build GCC on x86 Linux (glibc) so that it builds statically linked binaries for x86_64 Windows (newlib-cygwin) while itself running on an AArch32 machine under NetBSD (and all of that for several languages).

The simple fact that Canadian crossing is possible and supported should give some insight into the internals of GCC.

With GCC you also build `libgcc`, a library that every GCC-built binary is linked against to ease some optimizations. This is another hint: compilers are not as simple and modular as one might think from a high-level description of them.

Finally, it's not entirely true that you can disable every optimization: not only because there is no real difference between optimizations and the other transformations performed during compilation, but also because most combinations of optimizations have never really been tested.

So I'd argue that, for a tester, modern compilers are among the worst possible nightmares of today's computing.

Yes, they are functional, but I'd guess nobody would live long enough to seriously test every possible combination of options of a single GCC release to ensure it maps each possible input to the correct output.

@Xipiryon @ekaitz_zarraga @suetanvil@mastodon.technology

@mathew

Apparently nobody really knows why it's called this way.

A "Canadian cross" compilation is simply the cross compilation of a cross compiler.

Indeed, during configuration you can specify 3 different systems:

1) the `build` system, where the compilation is going to run
2) the `host` system, where the produced compiler is going to run
3) the `target` system, which will run the binaries produced by that compiler.
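A Canadian cross is then just a configure invocation where all three differ. A sketch matching the example above (the exact triplet spellings are illustrative):

```shell
# build:  the x86_64 Linux machine doing the compiling
# host:   the AArch32 NetBSD machine that will run the resulting gcc
# target: x86_64 Windows, for which that gcc will generate code
./configure --build=x86_64-pc-linux-gnu \
            --host=armv7-unknown-netbsd \
            --target=x86_64-w64-mingw32
```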

en.wikipedia.org/wiki/Cross_co

@Shamar Ah. Couldn't find any explanations by searching the web.
