I really ought to be writing some code, if not for my job interview, then at least for my upcoming SVFIG talk. But with my headache, I am the intellectual equivalent of a jellyfish right now.
@saper Range checking is a separate issue. YES, I want range checking. (I also want the option to selectively turn that stuff off when I want to as well, but I digress.) But I see that as an attribute of a more expressive type system, and in my view it has little or nothing to do with how pointers are expressed at run-time.
Unless we're talking about something else? Maybe I missed a cue somewhere.
@saper I'm not sure how this relates. Real-mode segment registers are just base address registers, and they all refer to the same physical address space.
Different *classes* of address space didn't come into vogue on the x86 until the 80286's Protected Mode, which allowed 16-bit selectors to serve as *handles* into a descriptor table; each descriptor carried a 24-bit base address along with memory protection bits.
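From memory, a 286 descriptor entry looked roughly like this (a sketch in C; the field names are my own invention):

#include <stdint.h>

/* Rough sketch of an 80286 segment descriptor (field names mine). */
struct descriptor {
    uint16_t limit;    /* highest valid offset within the segment */
    uint16_t base_lo;  /* low 16 bits of the 24-bit base address */
    uint8_t  base_hi;  /* high 8 bits of the 24-bit base address */
    uint8_t  access;   /* present bit, privilege level, segment type */
    uint16_t reserved; /* must be zero on the 286 */
};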
@saper If one wanted to prove the point, one could literally implement an Oberon compiler for a 64-bit virtual address space, but use 16-bit precision for pointers. Yes, you'll be limited to just 65536 distinctly different records or arrays, BUT, those individual records/arrays can be gargantuan.
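A minimal sketch of that run-time model, in C purely for illustration (every name here is hypothetical):

#include <stdint.h>

/* Hypothetical scheme: a 16-bit "pointer" is just an index into a
 * table of full-width base addresses. The index itself never
 * participates in arithmetic. */
typedef uint16_t oberon_ptr;          /* index 0 reserved for NIL */

static void *base_table[65536];       /* one slot per live record/array */

static inline void *deref(oberon_ptr p) {
    return base_table[p];             /* the only place the base matters */
}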
@saper C can't handle this because its pointers must obey the laws of addition (commutativity, monotonicity, etc.). That requirement does not hold for Pascal, Modula-2, or Oberon: a pointer there only has to point at something. It's a run-time-generated name that stands in for something else. That's all it's good for.
@saper I never programmed with Turbo Pascal; only with VAX Pascal. I didn't get to use a Wirth-authored language again until Oberon System, which by then was targeting a 32-bit flat environment.
However, I did not have bounds checking in mind at all.
The fact is, a segment register in real mode is simply (base_address >> 4). That's it. It is literally a 20-bit pointer stuffed into a 16-bit word.
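In code, the whole translation is one line (a sketch):

#include <stdint.h>

/* Real-mode translation: the segment register holds base >> 4. */
static inline uint32_t phys_addr(uint16_t seg, uint16_t off) {
    return ((uint32_t)seg << 4) + off; /* 20-bit address (wraps on a real 8086) */
}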
@saper No, I'm not referring to that either. I'm literally *just* referring to the fact that a pointer points at the head of something. Never within said something. Never beyond said something. Only *at* something.
@saper Contrast this with a language designed by Wirth or any of its intellectual descendants: the only things you can do with a pointer are assign it, test it for NIL, and dereference it. That's *it*. A[B] was syntactically and checkably different from P^.
This distinction allows the compiler greater freedom to implement pointers however makes sense on the target hardware. C's approach *requires* addresses to approximate a number line, because of the array/pointer equivalence.
@saper No, the problem is that A[B] in C is equivalent to *(A+B). C does not check that your expression makes any sense. Consider that these two expressions are completely identical:
#include <stdio.h>
struct { int foo; } bar[32];
int main(void) {
    int i = 5;
    printf("Approach 1: %d\n", bar[i].foo);
    printf("Approach 2: %d\n", i[bar].foo); // HUH?
}
That you could trivially exceed bounds, or concoct weirded-out statements like this, meant that segmentation could not be used.
@DistopianK @h Right, which is why I mentioned building the OS around CL. But RISC vs. stack-architecture CPUs (the latter of which described many Lisp machine processors back in the day)? I'm not sure that distinction matters anymore.
Which brings me to FOSH hardware; you can get (for example) a picoRV32-based system running quite inexpensively on an off-the-shelf FPGA board, at a modest yet respectable clock speed. The difficulty will be in porting the host OS to it, I think.
@saper No, there's lots of stuff wrong with allowing pointer arithmetic. It's not based on cells; it's based on the type of the pointed-at item, which can be type-cast at will to anything, anywhere.
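A tiny illustration of what I mean; nothing here is checked by the language:

#include <stdio.h>

int main(void) {
    double d[4] = { 1.0, 2.0, 3.0, 4.0 };
    double *p = d;

    /* The stride of "+ 1" depends entirely on the pointer's type... */
    printf("%p\n", (void *)(p + 1));          /* base + sizeof(double) */
    printf("%p\n", (void *)((char *)p + 1));  /* base + exactly 1 byte */

    /* ...and nothing stops you from re-typing the same storage. */
    long *q = (long *)p;
    printf("%p\n", (void *)(q + 1));          /* base + sizeof(long) */
    return 0;
}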
@DistopianK @h How do you define a "Lisp machine," though? A generic computer running Genera? Or a machine running on top of a processor whose instruction set is specifically tailored for executing Lisp?
If the former, then any open source computer could be suitable for this task. It just requires the effort of porting Common Lisp to it, and building an OS around said Common Lisp implementation.
@h This really nicely illustrates why Plan 9 is so superior to Linux here. Plan 9 is basically containerized out of the box; it's impossible to run anything *not* in a container. It's intrinsic in how Plan 9 works.
Here, we see how Linux tries to do the same, largely succeeding, but you have to be really persnickety about the fine details.
Docker would never succeed on Plan 9; there's no market for it. On Linux, it is a band-aid on top of the worse-is-better approach.
If you're a programmer who likes the functionality Docker provides but can't stand the bloat, I can't recommend Liz Rice's "Containers From Scratch" talk enough. It's already a classic, both for #golang and for hacker culture.
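Her demo is in Go, but the primitive underneath fits in a screenful of C. A minimal sketch, assuming Linux and root privileges, with error handling mostly omitted:

#define _GNU_SOURCE
#include <sched.h>
#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

static char stack[1024 * 1024];

static int child(void *arg) {
    /* Inside new UTS + PID namespaces: our own hostname, our own PID 1. */
    sethostname("container", strlen("container"));
    execl("/bin/sh", "/bin/sh", (char *)NULL);
    return 1;  /* only reached if exec fails */
}

int main(void) {
    /* clone() with CLONE_NEW* flags is the primitive Docker builds on. */
    pid_t pid = clone(child, stack + sizeof(stack),
                      CLONE_NEWUTS | CLONE_NEWPID | SIGCHLD, NULL);
    if (pid < 0) { perror("clone"); return 1; }
    waitpid(pid, NULL, 0);
    return 0;
}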
@grainloom @jjg @h Also, Rust's standard library makes heavy use of parametric polymorphism (generics), just as Boost does with templates for C++. You're going to pay for that as well, even if you compile for production and strip the executable.
You should find, though, that binary sizes won't grow as fast as the project does. That's been my experience, at least so far.