I’ve been setting up my new Cubietruck over the weekend …
Mr Market has been good to me this year, so I decided to treat myself. My Raspberry Pi, although good, is rather slow, and I’m pleased to say that the Cubieboard is WAY faster. I had in mind to buy a small computer, and an Acer Revo L80 Nettop PC took my fancy. It was towards the upper end of what I was planning to spend. After all, the house already has two computers, and I only intended to get a PC as a cheapo server. In the end, I opted for the Cubietruck rather than the Revo, not really because of price, but because of power consumption. I read that the Revo was rated at 65W. That is small by conventional PC standards, but large by Raspberry Pi standards. To put things into perspective, my Cubietruck is drawing about 5W of power, and that includes a 2.5″ SATA drive. Those ARM chips are really energy-efficient. The running costs for the Cubietruck will be minuscule compared to something with an Intel in it. Sure, the raw grunt will be less; but it all boils down to whether the processing power is sufficient.
Speaking of power, I am in the process of sourcing a power supply for the Cubietruck. USB alone cannot supply enough power for both the board and a drive. At the moment, the board is powered by a USB hub, feeding one set of power through the power barrel, and another through the USB OTG port. When I get a proper power supply, the Cubietruck will be powered purely through its power barrel, and a lot of the paraphernalia that you can see keeping it powered will go. Things will be much tidier.
Personally, I see that there’s a big market for a sub-£100 PC with an ARM processor, a SATA drive (not a micro SD card), and a power adapter, all in a nice enclosure about the size of a book. Ship it with Android preinstalled, and you’d have a nice little consumer device at half the price of a Nexus 7. You wouldn’t get a screen, keyboard, or mouse, of course, but the emphasis would be on cheap. Design it so that Android was easy to replace with Linux, and not only would it be good for general usage, but it would also attract geeks looking for a cheap, energy-efficient server.
All this SHOULD be possible. The whole idea seems so logical and feasible to me that I almost can’t see it not happening within the next few years. A fair few compromises would have to be made to get to the sub-£100 level, but I think the trade-offs would be worth it to achieve that price point.
Golang is available for the Raspberry Pi from the Raspbian repos. Unfortunately, whenever I used that version, my programs segfaulted. When I compiled a version from Google’s sources instead, they worked. I tried compiling Golang on my Cubietruck, but ran into memory problems. I’m not sure what’s going on there.
So I’m now abandoning Golang. It was an interesting experiment. It was difficult getting things working in the first place, and setting things up nicely; but I guess that could be due to the fact that I had never used it before. Golang has been called a half-way house between C and Python. That is a good description, and is apt in an extra way that the man who coined it didn’t mean to express … in Python, implementation details are hidden from the coder, and in C, they’re all exposed and raw. Golang seems, to me, to be “half in the box, and half out”. I find it very difficult to wrap my head around whether I’m passing things by pointer/reference/value/whatever.
Consider the implementation of slices, for example. They are small structures containing the current length of the slice, its capacity, and a pointer to a contiguous block of underlying data. So when you pass a slice, you are passing a copy of that structure. The structure itself is small, consisting of only 3 words, even though the data it points to may be large. So when you pass it in as an argument, you receive a shallow copy in the callee. Copying is therefore a cheap operation.
The problem is, without knowing these kinds of implementation details, it’s very difficult to know “what I’ve got” from a callee’s viewpoint. Will my modifications stick? Am I trashing data? Have I got a pointer or a copy? How is it going to be freed? And so forth. With Python, I never have to worry about memory. With C, I have to worry about it all the time, to the point where everything has to be told explicitly what to do. With Golang, it’s more of an open question. That’s why I say it’s “half in the box, and half out”.
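To make the slice semantics concrete, here is a minimal sketch of both behaviours: a write through a passed slice is visible to the caller, because the copied header points at the same underlying data, but growing the slice is not, because the callee only has a copy of the header.

```go
package main

import "fmt"

// modify writes through the copied slice header: the copy still points
// at the same underlying array, so the change is visible to the caller.
func modify(s []int) {
	s[0] = 99
}

// grow appends through the copied header. Whether or not append
// reallocates, the caller's own length field is never updated.
func grow(s []int) {
	s = append(s, 42)
}

func main() {
	s := []int{1, 2, 3}
	modify(s)
	fmt.Println(s[0]) // 99: the element write stuck
	grow(s)
	fmt.Println(len(s)) // 3: the caller never sees the appended element
}
```

So element writes “stick”, but structural changes to the slice itself do not, which is exactly the kind of thing you have to know about the implementation before you can reason about a callee.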
Python is very good at rapid application development, but I’m growing increasingly dissatisfied with its performance. So I’m re-exploring a C implementation of an accounting package I have. I was able to extend it by getting it to create a set of financial statements from an extended trial balance. It took me about half a day, which is actually “quite good”. So C is not “too” bad to program in, and the programs you end up with are blisteringly fast compared to Python. The C program had the ability to interface with Lua via SWIG, but I decided to rip that out, for the following reasons: 1) it made the build process more complicated; 2) it slowed the application down (whilst Lua is very fast compared to other scripting languages, it’s still very slow compared with raw C); 3) I don’t think there are any real benefits to it. For one-man projects, you don’t need a scripting language; you already “have” one: C itself. Just get C to do whatever you want it to do.
My application parses data files using flex and bison. I had written the equivalent lexer in Golang. Golang has an interesting way of lexing Go code itself, but I decided to parse my files in a naive way: state transitions via goto jumps. Oh, the horrors, I hear you cry. The resulting code turned out to be “not too bad”, and I’m thinking of re-implementing it in C in place of my lex and yacc files. The grammar of my language is simple enough, so I think I can get away with it.
As part of looking at my C code again, I decided to look at GNU’s autotools. I wish I hadn’t! I have no idea what’s going on there! I started with “autoproject”, an executable that creates a barebones project for you. The number of files and directories it creates is unbelievable: cache directories, m4 macro directories, a whole panoply of stuff. Trying to get configure.ac and Makefile.am to process lex/yacc files was like pushing water up a hill.
Apparently, autotools uses a script called “ylwrap” to generate C code from lex and yacc files. It does this in some kind of unfathomable quirky way. In the end, I thought it was all just a complete mess, and decided to ditch the autotools and just go back to regular Makefiles. The complexity just wasn’t worth it to me. I had sacrificed enough chickens.
This does raise an interesting question: why is the build process of C projects so very, very complicated, whilst Golang projects are so simple and quick? I decided that part of the answer is one of “setting policy”. By mandating a directory structure, the Golang compiler eliminates the complexity of knowing where source code is. When everything is specified by convention, there is little room for problems. You effectively have a complete source tree. Golang also compiles only Go code, which removes another layer of complexity.
Autotools can compile anything; so some things are possible using Makefiles that are impossible under Golang’s design concept. So although autotools is a frustrating nightmare, it might ultimately be necessary. I also think that Golang is going to experience its own “dependency hell” at some point – not so much from Golang’s implementers, who after all control the source tree, but from programmers who use Golang, where I foresee plenty of API versioning breakage down the road. I am aware that some developers are coming up with the equivalent of Python’s virtual environments. This seems antithetical to the Golang Way: it’s supposed to be simple, and now someone’s coming along to create an extra level of abstraction. But I also think that it’s going to be necessary in the end.
HOWEVER, having said that, the way Golang works does suggest that there’s a better way of constructing build environments. Software construction tools should therefore be judged not by their level of sophistication, but by their level of simplicity. That simplicity can be achieved through the setting of policies.
For example, there could be a simple tool that compiled all the C source code in a directory to object code, and put all the object code in an archive in that directory. If the directory had a file called main.c, it could also build an executable. No Makefile required! To a first order of approximation, that’s pretty much how Golang does it anyway. The onus is on developers to structure their projects in a way that makes sense. I don’t have a problem with that; it just means that programmers should logically order their code into systems and subsystems.
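To illustrate, here is a hypothetical sketch in Golang of that policy: given the files in a directory, derive the build commands with no Makefile. The tool names (“cc”, “ar”) and the command shapes are assumptions for the sketch; no such tool exists as far as I know.

```go
package main

import (
	"fmt"
	"path/filepath"
	"strings"
)

// plan applies the policy described above to one directory: every .c
// file becomes a .o, all the .o files go into an archive named after
// the directory, and the presence of main.c additionally produces an
// executable. It returns the commands that would be run.
func plan(dir string, files []string) []string {
	var cmds, objs []string
	linkMain := false
	for _, f := range files {
		if filepath.Ext(f) != ".c" {
			continue // policy: only C sources are considered
		}
		obj := strings.TrimSuffix(f, ".c") + ".o"
		objs = append(objs, obj)
		cmds = append(cmds, "cc -c "+f+" -o "+obj)
		if f == "main.c" {
			linkMain = true
		}
	}
	if len(objs) > 0 {
		cmds = append(cmds, "ar rcs "+filepath.Base(dir)+".a "+strings.Join(objs, " "))
	}
	if linkMain {
		cmds = append(cmds, "cc -o "+filepath.Base(dir)+" "+strings.Join(objs, " "))
	}
	return cmds
}

func main() {
	for _, c := range plan("ledger", []string{"main.c", "parse.c", "README"}) {
		fmt.Println(c)
	}
}
```

Everything here is decided by convention rather than configuration, which is the whole point: the only input is the directory’s contents.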
An additional idea I had would be to copy an idea from the Scheme compiler, DrRacket. In the bad old days, module locations were specified separately between the GUI and command-line interfaces. But then they came up with “raco link”, which was a way of specifying where a module was.
There was a similar complexity with Lisp, where projects could be stored anywhere. It was typical to use symbolic links to an ASDF system directory. Too bad for Windows users, whose OS doesn’t have symlinks in any way comparable to UNIX’s. But then QuickLisp came along, did away with all that nonsense, and stored projects in a user repository, quietly tucked out of the way. What was formerly very manual and very labour-intensive suddenly became transparent. That’s why I said before that QuickLisp was the best thing to happen to Lisp in a decade.
Anyway, those are my thoughts.
Update 17:27 Another thing that I realised is that an advantage of transitioning to something like what I have suggested is that it doesn’t require a shell. How is that an advantage? Well, if you try to compile programs on a platform like MinGW, you’ll often find that configure hangs (at least it did for me on a regular basis when I used it a few years ago). I think that’s because MinGW doesn’t have a proper bash shell. And it always seemed to have weird ideas about where things were kept. I think it had something to do with the differences between the way Windows handles filenames and the way bash does. And you thought autotools was cross-platform!