I’ve read over the docs about separate compilation.
I am going to need to restructure quite a lot, and go over my code with a fine-tooth comb.
Separate compilation, as a concept, has never come up in my life before, and somehow I have been writing working code (in C, C++, and TADS 3) this entire time without ever running into errors from skipping it.
For any of you who have not seen my source code yet, my makefile has only one source declaration.
I mean… putting everything into a single compilation unit like that is a valid approach too. Separate compilation exists largely to speed up the build process because you don’t usually have to rebuild the entire project, only the parts that have changed.
For instance, C++ has especially long compilation times: around 2000 it was trivially easy to have smallish projects that still took upwards of 30 seconds to compile, and big commercial projects were the kind of thing where you’d work during the day, leave the build running over lunch or overnight, and come back to see if your code compiled.
Separate compilation may have some privacy benefits: this module can’t see the internals of that module because they’re not compiled together. But in IF I feel like a lot more stuff is global anyway: the real world is global; it doesn’t restrict some aspects to only certain actors and objects.
So sure, you may want to restructure. But if your compilation times aren’t painfully slow, I don’t think there’s anything really wrong with your approach…
Well, the problem I’m having (mostly) is I have a giant chunk of code that I am separating into a module, which will be reused in other games, but I want to keep this module in one spot, and import it using the makefile. If I have multiple modules to import through the makefile (with overlapping dependencies), then it’s looking like the separate compilation structure will be a lot more compatible, unless I’ve misunderstood something.
I’m just really sick of importing modules with symlinks or just deep-copying them into a project.
As someone who hyper-aggressively splits things off into modules, I just have a working directory that all the individual modules live in, and I add that path to the project’s makefile via -Fs [path to top-level modules directory]. Each module can then be added via -lib [module directory name]/[module tl file name], which is usually just the module name twice, i.e. -lib simpleGraph/simpleGraph (the .tl extension being optional, and me being very lazy).
Each module’s *.tl is just a list of all the *.t files in the module along with the module name (like the example I posted above).
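Concretely, the setup described above looks something like this. The module name simpleGraph, the file names, and the paths are all placeholders from my own conventions, not anything prescribed. The project’s *.t3m gets the search path and the library references:

```
-o myGame.t3
-Fs /home/me/tads-modules
-lib system
-lib adv3/adv3
-lib simpleGraph/simpleGraph
-source gameMain.t
```

And each module directory carries its own *.tl listing the module’s name and sources:

```
name: Simple Graph
source: simpleGraph
source: graphNode
```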
It’s a little bit of bookkeeping for each module, but it’s all pretty straightforward. And at this point I’ve done it so many times I just have a module template, so whenever I’m creating a new module I make a new copy of the template, do a global search-and-replace of the placeholder text for the real module name, and that’s the majority of the bookkeeping done.
It’s always frustrating when you find out that you’ve been doing something “wrong”, but the bright side is a) you now know better, and b) it’s an indication that you’re not digging in your heels and refusing to learn or change what you’re doing, which is a trap it can be easy to fall into.
On the one hand looking at a bunch of work you’ve invested a lot of time into and realizing “I should’ve done this a different way” is aggravating as hell but on the other hand it also means “I’ve learned something and so now I’m better at this than when I started this project”. Which is something you should always be hoping for.
So I’ve been mulling this over a lot lately, and I’ve decided that game projects themselves are better off with a TADS 2 structure, and reusable modules are better off with TADS 3 separate compilation.
Mostly because a large game project actually becomes more disorganized and difficult to manage if it also follows separate compilation. At least in my opinion.
So I’ve been informed that in TADS 2, the convention was to use #include "sourceFile.t", instead of using -source sourceFile.t in a makefile. As a result, TADS 2 games became a tree of include statements to wrap all the source files into your game.
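For anyone who hasn’t seen it, that convention looks roughly like this (file names invented for illustration): the top-level game file #includes the major chunks, and those chunks #include their own pieces in turn:

```
// myGame.t (top level):
#include "rooms.t"
#include "npcs.t"
#include "items.t"

// ...and rooms.t pulls in its own subtree:
#include "kitchen.t"
#include "cellar.t"
```

The compiler is then pointed at the single top-level file, and the whole tree comes along for the ride.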
I wasn’t there to know about TADS 2, but the way my games are organized seems to be called a “TADS 2” structure.
ALSO UPDATE: I’ve reorganized my reusable modules, fixed my TADS 3 tools, and updated all of my projects to use the new organization system!!
In practice you end up with a file that contains a long list of all the *.t source files in the project. What’s the organizational advantage to putting that list in another *.t file and then #including it, versus putting it into a *.t3m file that gets used as an argument for t3make?
Not arguing here, just really not seeing the advantage.
It’s organized more like a tree in I Am Prey, where I can group code by relevance and reuse source file names across multiple directories, for when several parts of the project share similar ideas in name only but require very different executions.
Also, for stuff that doesn’t qualify for reusable code, but also has templates or define statements that are only relevant for one single sub-part of the game, it’s easier to keep organized if it can be put in a nearby (or the same) source file, rather than a top-level project *.h file.
The facility map, for example, has a *.t file that includes code that the map rooms specifically will be sharing, as well as includes for each room’s *.t file. This map-wide *.t file is then included in the top-level gameMain.t source file. For more complex rooms, there might be multiple *.t files, and I would like to allow for the reuse of names for rooms that have analogous features.
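As a rough sketch (these aren’t the actual I Am Prey file names, just the shape of the idea):

```
// gameMain.t (top level):
#include "facilityMap/facilityMap.t"

// facilityMap/facilityMap.t: the map-wide shared code lives here,
// followed by one include per room (or per room directory):
#include "facilityMap/breakRoom/breakRoom.t"
#include "facilityMap/labs/labs.t"
```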
Dumping all the *.h-relevant stuff into one game-project-wide file makes it feel like I’m keeping fragments of one intellectual unit scattered across opposite ends of an organizational space, which is weird from a planning and onboarding-future-me perspective.
Doing this for libraries is necessary for the makefile to work smoothly, but it doesn’t really seem to matter for games, which are “top-level”. Game-level organization should prioritize the dev’s expectations over the compiler’s, if the dev wants to prioritize readability and the ability to pause the project and return to it later.
Case in Point:
One of the major issues I had when trying to figure out how the code for Adv3Lite worked (when working on my pronouns module) was that the separate compilation structure made the entire codebase seem almost interleaved and scattered. I had to note down multiple line numbers across multiple source files just to behold one functional unit of the library, and that discovery process alone took a day.
I don’t want to do that to my future self if there is a project I need to take a break from when life stuff gets in the way. Organizing it with an include-facilitated tree structure allows me to relearn whole, complete functional units one at a time, which can cut down my dev time by quite a lot.
Yeah, the fact that the compiler only cares about the file’s basename instead of its full path when uniquifying filenames feels like a borderline misfeature. Although it does make filename references in error messages less ambiguous.
But I don’t see the advantage of spreading out file dependencies the way you’re describing. At least if you’re not going whole hog into breaking up all the individual functional units into modules or something extreme like that. Because the code doesn’t care about that kind of thing…an individual *.t source file doesn’t need an #include for objects/classes/whatever it references. Just #defines and template statements. And those (and pretty much nothing else) should go in your *.h files. Then the *.h files are the things that you’re selectively #including in individual source *.t files, based on whether or not they reference one of the #define or template declarations. Although you can absolutely get away with a global header file in most cases, unless you’re doing something really exotic (like using different templates for the same class in different parts of the project).
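As a sketch of what that selective inclusion looks like in practice (the header name, the #define, and the template are all made up for illustration; adv3 already ships its own Room templates):

```
// mapDefs.h: only preprocessor-level declarations live here
#define DEFAULT_BRIGHTNESS 3
Room template 'roomName' "desc"?;

// hallway.t: includes the header because it uses the template
#include "mapDefs.h"
hallway: Room 'Hallway' "A long, echoing hallway. ";
```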
In terms of code maintenance, doing things the “T3” way has the advantage that you don’t have to worry about the “shape” of #include dependencies…because it’s basically always just a flat list in the makefile. Because that’s all the compiler cares about, and therefore that’s all you have to keep track of when you’re coding: when I’m compiling this thing, is the class/object/whatever this bit of code is referencing defined anywhere else in the project? If yes, that’s all you have to worry about. If no, then you need to add it. Where? To the makefile. You could stick it into a T2-ish chain of #include dependencies…but then you’re basically just adding new rules you’re going to have to keep track of.
With #define and template statements, those aren’t for the compiler, they’re for the preprocessor. So those are always going to be “local” to the individual source file…because they’re not compiled symbols/objects, they’re just substitutions that the preprocessor has to take care of before the source is handed to the compiler. That’s still going on if you wrap everything in a big #include chain of *.t files…you’re just making the preprocessor treat the entire project like one giant source file that it has to twiddle all at once.
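To make that concrete (illustrative names again): a templated object definition is just shorthand that gets expanded into explicit property assignments before the compiler proper deals with it, which is why the template declaration has to be visible earlier in the same compilation unit:

```
// With this template in scope:
Thing template 'vocab' 'name' "desc"?;

// ...this short form:
rock: Thing 'rock' 'rock' "An ordinary rock. ";

// ...is equivalent to writing it out longhand:
rock: Thing
    vocab = 'rock'
    name = 'rock'
    desc = "An ordinary rock. "
;
```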
Again, really not trying to argue with you or talk you into/out of anything. Your code is yours, and the best kind of coding practice is the kind that you can get releases out of. So philosophical questions aside there’s absolutely nothing wrong with something that actually works for you.
But I’m trying to be persuaded here and I’m just not seeing it.
You have stated your arguments, but the arguments you stated were largely the exact problems I identified as needing solutions, so I’m not sure there’s any version of this conversation where you’ll be persuaded, but I can try.
Condensing file dependencies seems to spread out logical organization dependencies as a necessary side-effect, especially because it enforces a strict separation of what is allowed in a *.t, which makes it much more confusing and difficult to relearn a project, abstraction layer, or functional scope.
This organization system is for how my human brain learns and categorizes the project structure, not for how the compiler does. When you have ADHD, structuring a project to allow your future self to stumble through relearning it is critical to the project’s success, even if you need to relearn it a week from now.
Correct. That’s the problem I am solving.
Also correct, but that’s also the problem I am solving.
If you are still in-progress on your project, are aware of its contents, and do not have severe ADHD, sure. If you are switching to a different organizational cluster, or are opening an old project to relearn it, the #include statements tell you where to read next, like a tutorial or tour guide. If you can only work in one focused, abstracted logical cluster at a time (because of ADHD), then the makefile is an unknown colossus for now; you do not recognize the makefile. Part of you once did, but that part of you is gone until you’re done working on this cluster.
So after you sort/group/cluster your relevant #includes at the relevant abstraction layer and in the relevant logical group, your brain can release that and become free to observe the abstraction layer of the makefile, but you lose focus on the abstraction layer of the code you were just working on. The next time you visit the code, you will need the relevant #includes, templates, and #defines nearby to relearn and understand what needs to be done.
Comments can do this too, but if the shape of the dependency tree also does it, then you have a fallback for when you later blank on something you didn’t think needed a comment at the time.
The #include statement is not added to the project last; it’s usually added first. Code grows outward like a tree, instead of in a separate unit that is later attached. I don’t need to know what’s in the makefile or what has already been given an #include if the structure comes first.
Again, just so I’m clear: I absolutely would not use this for modules. This is for top-level game code.
I have a process now for separating out game code into a reusable makefile module, if necessary, but code creation needs to happen using the tree structure first, for me to successfully produce it and make a module as a second pass.
“Because I have ADHD and this strategy scratches a particular itch for me” is a perfectly good reason. And it’s probably a more coherent rationale than the majority of coding standards.
And while I don’t (as far as I know) have anything like ADHD, I totally understand the desire to leave a trail of breadcrumbs for the future version of myself that is going to have to decipher my code, having forgotten most of the details I had at top of mind when I wrote it.
I usually use project layout to do most of this: path names, file names, and naming conventions (for classes and so on) that are re-used across projects. I also tend to map these fairly directly to my workspace(s), so information about what kind of thing is where is also something I can read “geographically” after the workspace is set up. Inheritance moves from screen left to screen right, for example; editors bottommost, source viewers above; far upper right are two paired xterms, one (left) for compiling and the other (right) for test runs; and so on.
Not trying to pitch the idea or anything here, just talking about how I approach the dreaded “what the hell was I thinking six months ago” programming problem.