The elegantly designed computer systems of Guardians Of The Galaxy
For those who know me, using Slackware is something of a family tradition. If you look at my previous posts, I explored some other distributions and finally settled on Slack because of its vanilla nature and rock-solid reliability. As a design philosophy, I’m all for it, mostly because it works, and if things break, I don’t have to go wading through a plethora of options to make them do what I want.
Take Ubuntu; I love ‘out of the box’ distributions and OSes as much as the average person does, but there are things a non-standard user can find really annoying. Ubuntu’s Unity desktop isn’t all that fast on a regular laptop. Sure, laptops are probably going the way of the trash bin in favor of touch devices, but where there are computers, there are developers for computers, and I’d still wager that developers prefer a proper keyboard to a touch screen (at least until we get some crazy tactile interfaces like in Minority Report)…
I decided Unity was too slow and switched to i3; its principal advantage for me was the bliss of being able to switch particular windows to a floating mode, which some applications absolutely require (GIMP, certain preference panes, etc.). Other than that, a tiling layout is perfect for a smaller resolution like a laptop’s: no time is wasted rearranging windows and making sure one isn’t covering another.
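For reference, i3 handles both cases with a couple of config lines: a keybinding to toggle floating on any window, plus `for_window` rules to float particular applications automatically. A hedged sketch of the relevant `~/.config/i3/config` lines (the window classes here are assumptions; check yours with `xprop`):

```
# toggle floating on the currently focused window ($mod is your modifier key)
bindsym $mod+Shift+space floating toggle

# always open certain windows floating, matched by X11 window class
for_window [class="Gimp"] floating enable
for_window [class="Lxappearance"] floating enable
```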
Unfortunately, patching things up in Ubuntu is a lot harder, and I assume this is because of the layers of crud sandwiched between the user and the distribution’s gory details. I’m still struggling to change lightdm’s background, whereas other (read: more transparent) applications allow for quick, easy changes. It’s one of the reasons I like i3 and Openbox so much.
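For what it’s worth, when the greeter in play is lightdm-gtk-greeter, the background lives in a plain config file; a hedged sketch (the greeter in use and the image path are assumptions about your setup):

```ini
; /etc/lightdm/lightdm-gtk-greeter.conf
[greeter]
background=/usr/share/backgrounds/my-wallpaper.png
```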
Some people really value transparency, and it is difficult to program into an application. It’s always easier to hard-code things, or to bury details under a mountain of separation layers to enforce the ‘don’t break it’ mentality for the average user. Those who use Fedora or Ubuntu know most users will never leave Unity. But I shouldn’t have to change distros just to get things to do what I want, and people who go to Xubuntu or Kubuntu just to have a different window manager are either highly ill-informed or missing the point.
There are a lot of nice things Ubuntu does offer, in particular its wide variety of support and its package resources (which are fundamentally Debian APT repositories, with some Ubuntu-centric ones in the mix). There are times I start thinking about switching to a cleaner, simpler, less “in my way” distribution that favors customization, such as Crunchbang, and I would if it weren’t for the fact that I’d have to spend a whole week setting it up the way I want from the ground up.
Window managers, though, are interfaces just like anything else, and for some applications productivity is the aim. In a game, not so much, so transparency isn’t really needed; but there is a double-edged sword here: imagine the world of modding. If some games hadn’t been as open to modding, they might never have relished the success they had; Half-Life, Unreal, Skyrim: the list goes on for quite a number of titles. What does this mean? It means you’ve given a community the power of replayability, the power to keep using your tool in ways you never thought imaginable. That is a core design principle when it comes to interfaces: you may design for a particular demographic, but a whole slew of others may end up using your product, perhaps not even the ones you originally designed for! Ultimately, designing for transparency is a fundamental principle in the right contexts, and I feel that ‘keeping it safe’ for the average user actually ends up impeding the average user when it comes to something as simple as changing a background…
Grubby Hands on Loading OSs
The “GRand Unified Bootloader” is pretty cool, albeit non-essential when there are alternatives out there to boot your OS, such as LILO, ELILO, rEFInd, and so on. I recently had the unfortunate pleasure of trying to get GRUB 2 to work on my Slackware box and didn’t find much help. I’ll try to make this short and sweet:
You have a UEFI / legacy BIOS bootable system; cool! You can load multiple OSes no matter how they are configured, as long as each is configured correctly; awesome! But the unfortunate catch in all this is that you can only choose between them from the BIOS menu; damn!
Why is this?
You cannot (generally) mix and match boot configurations inside a single boot loader, so the idea when configuring GRUB and friends is to stay consistent with how your other partitions and OSes are set up.
Ok, I have a GPT; I can do a legacy install of grub2, can’t I?
As was my case, it depends entirely on your motherboard. I tried endless configurations to get my Asus Maximus VI Gene to play nice and properly load GRUB 2 on my GPT-partitioned 250GB SSD, but absolutely nothing would work. The answer? Go back to MBR and do a basic legacy boot. My assumption here is that some legacy BIOS implementations aren’t prepared to boot from GPT properly, although GRUB 2’s setup with a bios_grub partition and the like is supposed to alleviate exactly that problem.
Alright, well, I don’t mind having one that’s UEFI and the other as MBR; I’ll just use the BIOS because I don’t switch that often.
UEFI is cool, but what benefit do you get? UEFI and GPT go together well given where system designs are headed these days; GPT supports a higher number of partitions, drives larger than 2TB, and so forth. If you are a Linux user who likes to separate directories into their own partitions and needs more than four primary partitions, then maybe GPT is the way to go. Otherwise, if you’re like me, using at most maybe four partitions with a main drive topping out at 250GB, there’s really no advantage to a GPT-formatted drive. If you do end up using GPT, then UEFI seems to be the wiser, easier way to go, but you’ll have to make sure all your other OSes and drives are configured for UEFI boot or you’ll never get the chainloader to load them properly.
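Before committing either way, it helps to know which firmware mode the system you’re sitting at actually booted in. A quick, hedged check: the kernel only creates `/sys/firmware/efi` when it was started via UEFI.

```shell
# Print which firmware mode the running kernel was booted in.
# /sys/firmware/efi exists only on a UEFI boot.
if [ -d /sys/firmware/efi ]; then
    mode="UEFI"
else
    mode="legacy BIOS"
fi
echo "Booted via $mode"
```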
How do I do this?
I don’t care about covering this in Arch-Wiki-like detail, so I’ll just say what I went through:
- Format your SSD for proper alignment. I have an unusual SSD, so my method involved a little more work, but cgdisk makes this simple enough. Note, however, that cgdisk creates GPT partition tables.
- When you’re done and happy with the layout of your drive, do your own formatting with mkfs (rather than letting your Linux installer do it) to make sure the SSD is aligned the way you like. Remember that you’ll need a 512MB EFI System Partition (type ef00) somewhere at the beginning of your drive, formatted as FAT32 (mkfs.fat -F32).
- Mount it someplace you like; you could make it your whole /boot directory, since it’s relatively large, or you could nest it inside /boot (e.g. /boot/efi). Whatever you choose, put it in your fstab for later.
- First, make sure you have grub2 and efibootmgr installed. Slackware comes with both of these.
- Two steps:
- mount -t efivarfs efivarfs /sys/firmware/efi/efivars
- modprobe efivars
- These may not work if you didn’t boot via EFI. You can boot via EFI in a bunch of different ways; I used the Slackware boot USB that was created during installation.
- install grub:
- grub-install --target=x86_64-efi --boot-directory=/boot (or /boot/efi, or whatever you chose) --bootloader-id=(whatever you want the name in your EFI list to be, like slack_grub or Linux) --recheck --debug
- At the very end of all the text it spits out on the screen, you should see the exact command grub-install handed to efibootmgr to create the listing. If you then run efibootmgr with no arguments, you’ll see a listing of all UEFI boot entries, and the ID you provided should be in there.
- PAY ATTENTION: if you see that efibootmgr was given a directive of “-p N”, N being a partition number that isn’t the partition your EFI partition is on, you’re going to remove the old listing it created and create a new one. Copy down the exact command that grub-install used with efibootmgr.
- efibootmgr -b N -B; N being the number associated with your listing (say the entry reads Boot0000* Linux; the number after “Boot” is the listing number, so we could issue efibootmgr -b 0000 -B). This removes your listing, but don’t worry; that’s why we’re making a new one in the next step.
- Now enter, verbatim, the efibootmgr command that grub2 used, but change the number after -p to the partition you know your EFI partition is on.
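The steps above, strung together, look roughly like this. This is a hedged sketch, not a recipe: the device (/dev/sda), partition numbers, mountpoint, entry label, and loader path are all assumptions; substitute your own layout, and note most of these commands need root and can eat data if pointed at the wrong disk.

```shell
# 1. Partition interactively with cgdisk; create a 512MB EFI System
#    Partition (type ef00) near the start of the drive.
cgdisk /dev/sda

# 2. Format the ESP as FAT32 (assuming it came out as /dev/sda1).
mkfs.fat -F32 /dev/sda1

# 3. Mount it and record it in fstab for later boots.
mkdir -p /boot/efi
mount /dev/sda1 /boot/efi
echo '/dev/sda1  /boot/efi  vfat  defaults  0  2' >> /etc/fstab

# 4. Expose EFI variables to userspace (requires having booted via EFI).
modprobe efivars
mount -t efivarfs efivarfs /sys/firmware/efi/efivars

# 5. Install GRUB 2 for EFI; --bootloader-id names the firmware entry.
grub-install --target=x86_64-efi --boot-directory=/boot \
             --efi-directory=/boot/efi \
             --bootloader-id=slack_grub --recheck --debug

# 6. Inspect the result; if the entry points at the wrong partition,
#    delete it and recreate it with the correct -p value, e.g.:
efibootmgr -v
efibootmgr -b 0000 -B
efibootmgr -c -d /dev/sda -p 1 -L slack_grub \
           -l '\EFI\slack_grub\grubx64.efi'
```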
In my case, grub2 kept linking my EFI listing to the wrong partition (2, which was my root partition) when it should have been linking to 1, my EFI partition. After reissuing the command pointing to the proper place, it worked; but since my other OSes are installed legacy-style with an MBR partition table, I had no choice but to convert my GPT setup to MBR via gdisk (under gdisk’s recovery and transformation menu there is a command for converting GPT to MBR). If everything is in its right place and you have some room at the front of the drive, you shouldn’t have an issue converting. That said, any advice I’m giving here is followed at your own discretion, and I offer no warranty that it will work safely, or work at all. That also said, a lot of this information is scattered about in various Stack Exchange Q&As and Linux distro wikis, so I thought I’d consolidate it someplace central in case other people hit the same issue.
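For the curious, the conversion itself is an interactive gdisk session; a hedged sketch (the device name is an assumption, and as above, back up first, this rewrites your partition table):

```shell
# Convert a GPT disk back to MBR with gdisk's recovery menu.
gdisk /dev/sda
# at the gdisk prompt:
#   r   -- enter the recovery and transformation menu
#   g   -- convert GPT into MBR and exit
#   w   -- confirm and write the new MBR table when prompted
```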
Every reader [of the SICP] should ask himself periodically “Toward what end, toward what end?” — but do not ask it too often lest you pass up the fun of programming for the constipation of bittersweet philosophy
Scottish game developer Chris Sawyer originally wanted to create a sequel to his highly successful Transport Tycoon, but after becoming obsessed with roller coasters, he changed the project into RollerCoaster Tycoon. Sawyer wrote RollerCoaster Tycoon in x86 assembly language, which was rare for a game published in the late 1990s. Some functions were written in C for interaction with the Windows operating system.
Not Book nor Concept.
I recently read two amazing articles on how reading code is not akin to reading literature and is more like exploration or dissection. I’ve also taken a stab at compiling my own kernel in my temporary Debian distribution I had mentioned in previous posts.
I bring up compiling one’s own kernel because it is not as uncommon in the Linux community as one might think, but it is questionable how many actually ‘hack’ at the kernel to see what its code looks like. Keep in mind: I haven’t either.
I have, though, learned that in most code-reading situations, unless the code is trivial and idiomatic, exploring its finer workings is an involved process, let alone getting a firm, clear grasp of what the code does and how it does it. As James Hague put it in the article above…
“I think that’s the only way to truly understand arbitrary source code. To load it up, to experiment, to interactively see how weird cases are handled, then keep expanding that knowledge until it encompasses the entire program.”
And the same is true for games: puzzles in games aren’t just opaque problem solving; sometimes there is a method to the madness, but often there is a great deal of ‘exploration’ involved. Exploration in games takes two forms: it can literally mean exploring a space and its contents, or figuratively mean exploring all the possibilities (via whatever approach the player chooses) of a given complication.
For a long time I have felt that using walkthroughs is a means of cheating oneself (unless you’re pressed for time and really just love stories), but this wasn’t always true. When I was younger, reading walkthroughs was almost essential for all the games I played, often just because I wanted to get to the end of the story, but plenty of other times because I couldn’t figure out for the life of me what to do. I’d go to a walkthrough and see that I hadn’t picked up some item I needed, so I’d trek my character back to that place and find it hidden in a blob of similar color. There were other games where timing and guessing the right actions were crucial factors, which led to insane amounts of frustration and little reward (The Black Cauldron, I’m looking at you!)
When trying to crack a codebase that is ‘non-trivial’ (as you’ll see others peg the term, denoting that there’s no money or job at stake), if you do manage to explore it and reach an understanding, there is a great epiphany that occurs, and this is the sensation patient ‘exploratory’ puzzle solvers quest after. They know the result, and the epiphany that follows, are there, and that both are squandered if they go the cheap route (à la walkthroughs).
But what is a puzzle? For the purposes of this article, I want to define a puzzle as any course of exploration (sequence of choices driven by intent or arbitration) that leads to an epiphany, challenging or not; so if finding a new clue from dialogue options helps us discover a new location and a sense of greater understanding (both of these can be nested under the umbrella that is the term ‘epiphany’), then this dialogue process would be a puzzle.
To add to this: people can only take so much frustration when dealing with our self-defined version of a puzzle. People want to get to that epiphany, but they don’t want to fight tooth and nail to do it (unless they’re having fun in the process!) This is something the Half-Life series did amazingly well, and I credit it as a leading factor in the series’ success.
Exploration doesn’t have to be purely silent, narrative-driven scenes a person meanders through: it can be a more engaging, direct, vivid experience for the user. Exploration doesn’t mesh well with competitive genres, however; though with a little poetic freedom, our definition of a puzzle could cover the epiphanies found in competitive matches, like chess (or RTS) players noticing great opportunities to take advantage of.
A Week (give or take) with Debian.
- Installed Debian wheezy (stable) with a netinstall disc.
- Changed to Jessie (testing), but broke apt-get and my video drivers in the process.
- Updated the kernel to 3.14-1-amd64, thinking this would help: it did not.
- Wrestled with apt-get (with numerous calls to apt-get <install / remove / upgrade / dist-upgrade> -f) and my video drivers (the nouveau module and the nvidia-driver Debian package conflicting with one another); now the system is working, and I have tested numerous flavors of window managers, which I’ll touch upon in future posts. Currently I’m using KDE.
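For anyone repeating the wheezy-to-jessie move, the switch itself boils down to repointing APT’s sources and dist-upgrading. A hedged sketch of what that looks like (the blunt sed over sources.list is an assumption about your file; edit it by hand if your mirror lines are unusual, and expect to need the -f pass if things break mid-upgrade like they did for me):

```shell
# 1. Point /etc/apt/sources.list at jessie (testing) instead of wheezy,
#    e.g. lines like: deb http://ftp.us.debian.org/debian/ jessie main
sed -i 's/wheezy/jessie/g' /etc/apt/sources.list

# 2. Refresh package lists, then upgrade in two stages.
apt-get update
apt-get upgrade
apt-get dist-upgrade

# 3. If dependencies end up broken partway through, this often recovers:
apt-get -f install
```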
At the moment, however, I’ve turned my attention to compiling a Linux kernel, and what I thought might be a somewhat simple experience has really become a larger, several-day project. There are an insane number of options, even in the graphical varieties of the configuration tool (one that uses ncurses and another that uses X11).
Some options can be real stumpers for someone who has never hacked a kernel before, but they can also be really enlightening if one manages to dig up enough sensible information from the internet. I would venture that someone in the beginner-to-intermediate range could compile a kernel on their own, but there are some essential choices that may or may not be enabled by default.
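The mechanical part of the build, at least, is short; it’s the configuration step that eats the days. A hedged sketch of the traditional flow (the source path and use of the ncurses configurator are assumptions; the install steps need root and will touch your boot setup):

```shell
# Work from wherever the kernel source tree was unpacked.
cd /usr/src/linux

make menuconfig          # the ncurses configurator
# (or `make xconfig` for the X11 variety)

make -j"$(nproc)"        # build the kernel image and modules
make modules_install     # install modules under /lib/modules/<version>
make install             # copy the image and System.map into /boot
# ...then update your boot loader config to point at the new kernel.
```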
When I have it finished and running, I’ll return with thoughts and considerations.