Grubby Hands on Loading OSs

The “Grand Unified Boot Loader” is pretty cool, albeit non-essential when there are alternatives out there to boot your OS such as LILO, ELILO, rEFInd, and so on. I recently had the dubious pleasure of trying to get GRUB 2 to work on my Slackware box and didn’t find much help. I’ll try to make this short and sweet:

You have a UEFI / legacy BIOS bootable system; cool! You can load multiple OSes, however they’re partitioned, as long as each is configured correctly; awesome! But the unfortunate catch is that you can only pick between them from the BIOS menu; damn!

Why is this?

You (generally) cannot mix and match boot configurations inside a single boot loader, so the idea when configuring something like GRUB and friends is to stay consistent with your other partitions and installs.

Ok, I have a GPT disk; I can do a legacy install of grub2, can’t I?

As was my case: it depends entirely on your motherboard. I tried endless configurations to get my Asus Maximus VI Gene to play nice and properly load grub2 on my GPT-partitioned 250GB SSD, but absolutely nothing would work. The answer? Go back to MBR and do a basic legacy boot. My assumption here is that some legacy BIOSes simply aren’t set up to look for GPT properly, even though grub2’s setup with a bios_grub partition and the like is supposed to alleviate exactly that problem.

Alright, well, I don’t mind having one drive that’s UEFI and the other MBR; I’ll just use the BIOS menu because I don’t switch that often.

UEFI is cool, but what benefit do you get? UEFI and GPT go together well thanks to where system designs are headed these days; GPT supports a higher number of partitions, drives greater than 2TB in size, and so forth. If you are a Linux user who likes to separate directories into their own partitions and go beyond 4 partitions, then maybe GPT is the way to go. Otherwise, if you’re like me, using at most maybe four partitions with a main drive topped out at 250GB, there’s really no advantage to having a GPT-formatted drive. If you do end up using GPT, then UEFI seems to be the wiser, easier way to go, but you’ll have to make sure all your other OSes and drives are configured for UEFI boot or you’ll never get the chainloader to load them properly.

How do I do this?

I don’t care about covering this in Arch-Wiki-like detail, so I’ll just say what I went through:

In my case, grub2 kept linking my EFI listing to the wrong partition (2, which was my root partition) when it should have been linking to 1, my EFI partition. After reissuing the link to the proper place it worked, but since my other OSes are installed legacy with an MBR partition table, I had no choice but to convert my GPT setup to MBR via gdisk (in gdisk, under the recovery and transformation menu, there is a command for converting GPT to MBR). If everything is in its right place and you have some room at the front of the drive, you shouldn’t have an issue converting. That said, any of the advice I’m giving here is followed at your own discretion, and I offer no warranty that it will work safely, or work at all. That also said, a lot of this information is scattered about in various Stack Exchange Q&As and Linux distro wikis, so I thought I’d consolidate it someplace central in case other people run into the same issue.
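For anyone wrestling with the same EFI mis-link, the commands involved look roughly like this. Device names and mount points are examples from my setup (yours will differ), everything needs root, and the gdisk step is destructive if anything goes wrong, so back up first:

```shell
# Inspect the current EFI boot entries to see which partition they point at.
efibootmgr -v

# Re-run grub-install against the *actual* EFI System Partition
# (here assumed to be partition 1, mounted at /boot/efi).
grub-install --target=x86_64-efi --efi-directory=/boot/efi --bootloader-id=grub

# To convert GPT back to MBR: open the disk in gdisk, enter the
# recovery/transformation menu with 'r', convert with 'g', write with 'w'.
gdisk /dev/sda
```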

A Less-Random Generator

I was going to do a write-up on PRNGs and randomness in games, but I think this article sums up the notion quite nicely (and it ends with a simple-enough-to-understand example of a shuffle-bag algorithm implemented in Python).
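In case that article ever rots away, here’s a minimal sketch of the shuffle-bag idea in Python (class and variable names are my own): instead of rolling independently every time, you shuffle the whole pool and draw without replacement, refilling only when the bag runs dry, so streaks of pure bad luck are bounded.

```python
import random

class ShuffleBag:
    """Draw items in random order; every item comes out exactly once
    per pass before any can repeat, unlike independent random rolls."""

    def __init__(self, items):
        self._items = list(items)   # the master pool
        self._bag = []              # the current (shuffled) pass

    def draw(self):
        if not self._bag:           # bag empty: refill and reshuffle
            self._bag = self._items[:]
            random.shuffle(self._bag)
        return self._bag.pop()

# e.g. a 2-in-3 'hit' chance that can never miss three times in a row
bag = ShuffleBag(["hit", "hit", "miss"])
one_pass = [bag.draw() for _ in range(3)]   # exactly two hits and one miss
```

The payoff is that the distribution still feels random locally, but over any full pass the frequencies are exact.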

Plotted Random Points

Every reader [of the SICP] should ask himself periodically “Toward what end, toward what end?” — but do not ask it too often lest you pass up the fun of programming for the constipation of bittersweet philosophy

Structure and Interpretation of Computer Programs

I don’t think they meant this.

Scottish game developer Chris Sawyer originally wanted to create a sequel to his highly successful Transport Tycoon, but after becoming obsessed with roller coasters, he changed the project into RollerCoaster Tycoon. Sawyer wrote RollerCoaster Tycoon in x86 assembly language, which was rare for a game published in the late 1990s. Some functions were written in C for interaction with the Windows operating system.

Not Book nor Concept.

I recently read two amazing articles on how reading code is not akin to reading literature and is more like exploration or dissection. I’ve also taken a stab at compiling my own kernel on the temporary Debian install I mentioned in previous posts.


I bring up compiling one’s own kernel because it is not as uncommon in the Linux community as one might think, but it is questionable how many people actually ‘hack’ at the kernel to see what its code looks like. Keep in mind: I haven’t either.

I have, though, learned that in most code-reading situations, unless the code is trivial and idiomatic, exploring its finer workings is generally an involved process, let alone getting a firm, clear grasp of what the code does and how it does it. As James Hague put it in the above article…

“I think that’s the only way to truly understand arbitrary source code. To load it up, to experiment, to interactively see how weird cases are handled, then keep expanding that knowledge until it encompasses the entire program.”

And the same is true for games: puzzles in games aren’t just opaque problem solving; sometimes there is a method to the madness, but a lot of the time there is a great deal of ‘exploration’ involved. Exploration in games can take two forms: it can literally mean exploring a space and its contents, and it can figuratively mean exploring all the possibilities of a given complication, via whatever strategies the player chooses for themselves.

For a long time I have felt that using walkthroughs is a means of cheating oneself (unless you’re pressed for time and really just love stories), but this wasn’t always true. When I was younger, reading walkthroughs was almost essential for all the games I played. A lot of the time this was just because I wanted to get to the end of the story, but plenty of other times it was because I couldn’t figure out for the life of me what to do. I’d go to a walkthrough and see that I hadn’t picked up some item I needed, so I’d trek my character back to that place and find it hidden in a blob of similar color. There were other games where timing and guessing the right actions were crucial, but led to insane amounts of frustration and little reward (The Black Cauldron, I’m looking at you!)


When trying to solve a codebase that is ‘non-trivial’ (as you’ll see others peg the term, denoting that there’s no money or job at stake), if you do manage to explore and reach an understanding of it, a great epiphany occurs, and this is the sensation patient, ‘exploratory’ puzzle solvers quest after. They know the result, and the epiphany that follows, are there, and that both are squandered if they go the cheap route (à la walkthroughs).

But what is a puzzle? For the purposes of this article, I want to define a puzzle as any course of exploration (a sequence of choices driven by intent or arbitration) that leads to an epiphany, challenging or not. So if finding a new clue in dialogue options helps us discover a new location and a sense of greater understanding (both of which can be nested under the umbrella term ‘epiphany’), then that dialogue process would be a puzzle.

To add to this: people can only take so much frustration when dealing with our self-defined version of a puzzle. People want to get to that epiphany, but they don’t want to fight tooth-and-nail to do it (unless they’re having fun in the process!) This is something the Half-Life series did amazingly well, and I credit it as a leading factor in the series’ success.


Exploration doesn’t have to be purely silent, narrative-driven scenes a person meanders through: it can be a more engaging, direct, vivid experience for the user. Exploration doesn’t mesh too well with competitive genres, however, but with a little poetic freedom, our definition of a puzzle could cover the epiphanies gained in competitive matches, like chess (or RTS) players noticing great opportunities to take advantage of.

A Week (give or take) with Debian.

A recap:

  1. Installed Debian wheezy (stable) with a netinstall disc. 
  2. Changed to Jessie (testing), but broke apt-get and my video drivers in the process.
  3. Updated the kernel to 3.14-1-amd64, thinking this would help: it did not.
  4. Wrestled with apt-get (with numerous calls to apt-get <install / remove / upgrade / dist-upgrade> -f) and my video drivers (the nouveau module and the nvidia-driver Debian package conflicting with one another). Now the system is working, and I have tested numerous flavors of window managers, which I’ll touch upon in future posts. Currently I’m using KDE.
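For reference, the apt wrestling in step 4 boiled down to cycling through variations of the following (run as root; this is the general recovery loop, not a guaranteed fix):

```shell
# Refresh package lists after switching sources from wheezy to jessie.
apt-get update

# Ask apt to repair broken/half-configured dependencies in place.
apt-get -f install

# Then retry the full upgrade to the testing release.
apt-get dist-upgrade
```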

At the moment, however, I’ve turned my attention to compiling a Linux kernel, and what I thought might be a somewhat simple experience has really become a larger, several-day project. There are an insane number of options, even in the graphical varieties of configuration (one uses ncurses and another uses X11).

Some options can really stump someone who has never hacked a kernel before, but they can also be really enlightening if one manages to dig up enough sensible data from the internet. I would venture to say that someone in the beginner-to-intermediate range could compile a kernel on their own, but there are some essential choices that may or may not be enabled by default.
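For the curious, the overall build loop I’m working through looks roughly like this (a sketch for Debian-ish systems; the install targets and paths can vary by distro, and the last steps need root):

```shell
# Start from the running kernel's configuration, if available.
cp /boot/config-$(uname -r) .config

# Pick a configuration front-end: the ncurses-based one...
make menuconfig
# ...or the X11-based one.
# make xconfig

# Build the kernel and modules, then install.
make -j$(nproc)
make modules_install
make install
```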

When I have it finished and running, I’ll report back with thoughts and considerations.

Finally, perhaps the most interesting difference is that all of the packaging tools are implemented as simple shell scripts. This greatly lowers the barrier of entry to tweaking them, bringing it within the bounds of a user with intermediate to advanced Linux knowledge, rather than a Linux ‘Guru’. For example, prior to the change of default Slackware package compression format (from gzip to xz) it was not uncommon for some Slackers to hack in support for alternative compression formats themselves. How many deb or rpm users (not developers) do you know who have hacked and adjusted their package managers to better suit their own personal needs or tastes?

I think the point to take away from all this is that one of the upsides of using a system like Linux, is that it is supposed to empower users and give them greater control of their system. In this regard Slackware’s package management tools excel.

Ruari’s thoughts on package managers in regards to Slackware.

4D Puzzler “Miegakure”

Persistent Changes or Persistently Changing?

As an update to my last post regarding further adventures into Linux, I’ve tried quite a few distros, a small smattering of software, attempted to tackle an ethernet driver issue, and pondered the requirements of a persistent-change live-USB distro (caveats notwithstanding). 

First let’s tackle considerations for the live-USB distribution. What are the fundamentals of a live-USB distribution of Linux?

  1. Flash drive health: this means keeping writes and erasures away from the drive as much as possible, both with regard to swap space and to files written to the drive.
  2. Limitations from alternate architectures: this means targeting a more ‘standard’ processor architecture such as i386 while keeping both low and high amounts of memory in mind. High amounts of memory will need a kernel that is PAE-enabled, or something similar, in order to access the 4GB+ a system might have. Low amounts of RAM will mean being very conscious about what is kept in memory, in case physical RAM is very low (say 1-2GB, on some recycled public computer).

To meet most of these requirements, we create a temporary filesystem in actual physical RAM with tmpfs; then, periodically or at shutdown / reboot, we write to a file that contains a whole filesystem we can simply ‘tack on’ to our main root tree. Once written out, we no longer need to keep everything in memory, but we still have to figure out a way (which I don’t know yet) to propagate deletions onto the written filesystem. We can make all these different mounted filesystems appear as one with tools like UnionFS or aufs; apparently aufs allows runtime writes to the flash drive and UnionFS does not, although I haven’t done enough scrounging to verify this.
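A rough sketch of the layering, assuming an aufs-capable kernel (the paths and image file names are hypothetical, this needs root, and it’s the shape of the idea rather than a tested recipe):

```shell
# The read-only distro image, mounted from the USB stick.
mount -o loop,ro /cdrom/filesystem.squashfs /mnt/ro

# A tmpfs branch to absorb all writes in RAM, sparing the flash drive.
mount -t tmpfs -o size=512m tmpfs /mnt/rw

# Union the two: writes land on the tmpfs branch (rw),
# reads fall through to the squashfs branch (ro).
mount -t aufs -o br=/mnt/rw=rw:/mnt/ro=ro none /mnt/union

# Persistence would then mean syncing /mnt/rw back into a filesystem
# image on the stick at shutdown (rsync or similar).
```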

Now onto the quick and dirty exploration of distros:

There are a lot of simply standard, functioning distributions out there. For someone who is even somewhat computer savvy and has the time, I would recommend a ground-up approach using Debian, but an amazing alternative is a distribution called Crunchbang, which I became very fond of; it takes the Debian experience and layers on a functioning, simple, easy-to-access (and ever-so-easy-to-modify) OpenBox experience.

However, the 3.2.0-4-amd64 kernel Crunchbang ran during my time in the ‘live session’ did not recognize my ethernet card. Try as I might to modprobe (a command to find and insert modules, which are akin to drivers) the Intel e1000e ‘driver’, it would not bind to my ethernet hardware, and after many fruitless hours of labour I had learned quite a bit but still never solved the issue. Being unable to patch a live-USB distro that doesn’t save persistent changes (I didn’t try; this is just an assumption, given the odd FAT32 initial folder structure and how something like apt-get or even make install might miss putting things in the right place), I simply assume the Intel e1000e driver it ships is out of date. That said: I still love Crunchbang, and after a full install a minor problem like network connectivity should go away: I will verify this soon.
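The debugging loop I went through, for anyone hitting a similar wall, was roughly this (run as root; e1000e is the driver for my particular Intel NIC, so substitute your own):

```shell
# See the ethernet controller and which kernel driver (if any) claimed it.
lspci -k | grep -iA3 ethernet

# Try loading the module by hand...
modprobe e1000e

# ...then check the kernel log for binding errors or unsupported-device
# complaints, which hint at a driver too old for the hardware.
dmesg | tail -n 20
```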

Lastly, Ubuntu and Elive have taught me a thing or two about what makes a wonderful desktop environment. Sometimes, as they say, you don’t realize what the good stuff is until you see the really bad stuff, and Elive did just that for me. Elive was functional, but so full of little embellishments that I felt like I was being conned by a snakeoil salesman and his petty fireworks. Ubuntu, on the other hand, got out of my way, and went out of its way to make sure I was gently carried along in my goal to function within an OS, rather than learn an OS. Elive was constantly trying to get me to like it by being clever and impressive, but Ubuntu simply tried helping me get what I needed to do, done. As I said in my initial postings about Ubuntu, however, the Richard Stallman freedom fighter in me felt somewhat appalled by the inclusion of ads in Ubuntu’s Dash search function, but it isn’t impossible to render this point null and void.

If I were personally to go about having an Ubuntu install on a system of my choosing, I’d make a few initial, immediate changes.

  1. Remove ads from the Dash search function: this can, supposedly, be done via the above link or by simply disabling the function from Ubuntu’s settings pane (some have checked in EtherApe, a graphical network monitoring tool, to see whether Ubuntu’s disclosed, official way of removing the ads is effective, and it appears to be, although I myself have not tested it).
  2. Install the Synaptic package manager as a replacement for Ubuntu’s built-in package system (to avoid purchasable items in their store, which is exactly what I didn’t want Linux for).
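On recent Unity versions, both changes boil down to a couple of commands (the gsettings key is from 13.10-era Unity, and on earlier releases removing the shopping lens package was the usual route; treat these as pointers to check against your release, not gospel):

```shell
# Turn off online/shopping results in the Dash.
gsettings set com.canonical.Unity.Lenses remote-content-search none

# On older releases, removing the shopping lens outright also worked:
# sudo apt-get remove unity-lens-shopping

# Install Synaptic as a front-end to the package system.
sudo apt-get install synaptic
```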

My next adventure will involve a full install of either Slackware or Ubuntu on my SSD and getting everything configured and working.