I’ve just returned from a four-day trip to Amsterdam with my girlfriend Lesley. It was the first time I’d been there, and it definitely lives up to its reputation as a purveyor of all sorts of temptation, but Lesley and I were really there for the beer and food. We ate falafel and Flemish-style chips (with mayonnaise and curry sauce) in the street, ate in Indonesian (sensational nasi goreng), Argentinian (steaks), Chinese and Japanese restaurants, had freshly-squeezed orange juice for breakfast every day, and sampled an authentic Dutch apple pie. It was all amazing, and it reminded me why Scotland is such an impoverished nation when it comes to food. We weren’t even able to escape Scottish misery: as we were eating chips on Dam Square, a bagpipe player in full regalia started playing for the crowds of tourists. Inexplicable, really. The weather, however, was definitely not Scottish, with blue skies the whole time we were there.
Last night there was a documentary on Channel 4 (one of the main broadcasters on UK television) that essentially asked the question above. It was shocking and startling, and there seems to be a very good chance that the answer to the question is no. Many atmospheric, oceanographic and biological scientists were interviewed, and they were not from the scientific fringe; they were from mainstream academia. All are of the opinion that the widely-held belief (for that is what it seems to be: a belief) that CO2 emissions from human activity cause global warming is based upon bad science, and after viewing this programme I agree with them. There are other theories about what causes global warming, most notably solar radiation, for which the scientific evidence is more compelling than that for CO2 emissions.
For the last seven years of my professional life, one issue has dominated above all others: metadata. Metadata is a simple notion really: describing things in a summarised fashion so that they can be discovered by searching catalogues and then used in a practical way. A library book index is an example of metadata, as is a telephone directory.
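As a sketch of the idea, here is a toy catalogue in Python. The records and fields are invented purely for illustration, loosely modelled on descriptive standards such as Dublin Core:

```python
# An illustrative catalogue: each record summarises a resource with a
# few descriptive fields (made-up data, loosely Dublin Core-flavoured).
catalogue = [
    {"title": "A History of Edinburgh", "creator": "J. Smith",
     "subject": ["Scotland", "history"], "identifier": "LIB-0001"},
    {"title": "Dutch Painting", "creator": "A. de Vries",
     "subject": ["art", "Netherlands"], "identifier": "LIB-0002"},
]

def search(term):
    """Return records whose title or subjects mention the term --
    discovery via the summary, without touching the resource itself."""
    term = term.lower()
    return [r for r in catalogue
            if term in r["title"].lower()
            or any(term in s.lower() for s in r["subject"])]

print([r["identifier"] for r in search("scotland")])  # ['LIB-0001']
```

The point is that the search never needs the book itself, only the summary record, which is exactly what a library index or telephone directory provides.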
Following on from my post ‘Why I love Google Maps’, the data from the Google Maps service is also very commonly used to create ‘mashups’. This is becoming a fashionable term for commentators to bandy about when talking about interesting new websites, in much the same way as the label ‘web 2.0’. Unlike ‘web 2.0’, however, the term mashup has a reasonably strict definition: combining data from disparate, unconnected sources on the internet into an original format, one that adds value in the way the data is combined and presented (graphically, in the case of Google Maps), that is new, and that the original generators of the data perhaps did not envisage it being used for. Having data that is freely available through adequately defined internet-based interfaces is of course a prerequisite for this.
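The essence of a mashup can be sketched in a few lines of Python. The two data sources below are invented stand-ins; in a real mashup they would be fetched over the internet from unconnected services (a mapping API and an events feed, say):

```python
# Hypothetical source 1: place names to coordinates, as a mapping
# service might provide them.
coordinates = {
    "Amsterdam": (52.37, 4.90),
    "Edinburgh": (55.95, -3.19),
}

# Hypothetical source 2: events by place, as a listings feed might.
events = [
    {"place": "Amsterdam", "name": "Beer festival"},
    {"place": "Edinburgh", "name": "Fringe"},
]

def mashup(coords, event_list):
    """Join the two sources on place name: each event gains the
    coordinates needed to plot it on a map -- value that neither
    source offers on its own."""
    return [dict(e, lat=coords[e["place"]][0], lon=coords[e["place"]][1])
            for e in event_list if e["place"] in coords]

for item in mashup(coordinates, events):
    print(item["name"], item["lat"], item["lon"])
```

Neither source alone can answer “where on the map is this event?”; the combined records can, which is the added value the definition above describes.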
Last weekend I spent a couple of days in Cheltenham visiting an old school friend, John. I had a great tour of most of the pubs and clubs and can confirm that yes, the place is small and peaceful, and just a bit upmarket (maybe too much for an unsophisticate like me), whilst at the same time having good nightlife. It was also a bit of a shock to spend the evening in smoky pubs – I have got used to Scottish pubs, and to waking up the day after not reeking of tobacco. It’s all change for England too in the summer, though, when its own smoking ban arrives.
One of the issues a software engineer who develops HTTP interfaces (i.e. websites) as part of their code has to consider is ‘accessibility’. This catch-all term covers many things, but essentially it means that a website must be implemented in such a way that no-one is excluded from using it. It’s often thought of as purely a graphical design consideration, but it is absolutely something that a coder has to consider too.
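One concrete example of what a coder (rather than a designer) can do is check generated markup automatically. This sketch uses Python’s standard-library HTML parser to count `<img>` tags that lack an `alt` attribute, one of the simplest machine-checkable accessibility rules; the sample markup is made up:

```python
from html.parser import HTMLParser

class AltChecker(HTMLParser):
    """Count <img> tags with no alt attribute: screen readers have
    nothing to announce for such images, so users who can't see the
    page are excluded from whatever the image conveys."""
    def __init__(self):
        super().__init__()
        self.missing_alt = 0

    def handle_starttag(self, tag, attrs):
        if tag == "img" and "alt" not in dict(attrs):
            self.missing_alt += 1

checker = AltChecker()
checker.feed('<p><img src="map.png" alt="Street map">'
             '<img src="logo.png"></p>')
print(checker.missing_alt)  # prints 1
```

A check like this can run in an automated test suite, which is exactly the sense in which accessibility is a coding concern and not only a design one.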
In the organisation where I am employed there is a dual, almost schizophrenic, nature to the work that I (and the other software engineers in my team) do. Our funding comes from several sources, but a large portion comes from academic research councils. This funding typically takes the form of short-term projects that focus on research materials, and on developing internet technologies to deliver those resources to the academic community in the UK in novel and groundbreaking ways. The work requires a specific type of software engineer: one who is happy to take on new challenges where there is little supporting documentation (because very often no-one else in the whole world is doing quite the same thing), where the audience of users is very small but knowledgeable (typically research academics and information specialists like librarians and archivists), and where the limitations of the funding require that all the software development (and sometimes the project management too) is done by exactly one person, perhaps on a part-time basis.
I go walking a lot in various parts of Scotland, especially the more mountainous and hilly bits, and I have a secret about this that I don’t tell many people: I’m fascinated by the aircraft crash wreckage that you find surprisingly often in these sorts of places. I’ve written about this in detail here.
There is a well-known ratio in engineering: 80:20. This ratio crops up in software engineering too, and refers to several rules of thumb that sound a bit flippant but in my experience are valuable real-world guides when managing software development projects. It is also called the ‘Pareto Principle’.
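For instance (the numbers below are invented purely to illustrate the shape), profiling data often shows that a small fraction of a system accounts for most of its runtime:

```python
# Hypothetical profiling figures: milliseconds spent in each of ten
# modules of an imaginary application, chosen to show the 80:20 shape.
times = [500, 300, 60, 40, 30, 25, 20, 15, 7, 3]

times.sort(reverse=True)           # busiest modules first
total = sum(times)                 # 1000 ms in total
busiest = times[:len(times) // 5]  # the top 20% of modules (2 of 10)
share = sum(busiest) / total
print(f"Top 20% of modules account for {share:.0%} of the runtime")  # 80%
```

The practical consequence for a project manager is the same whichever 80:20 rule of thumb is in play: effort is rarely spread evenly, so find the small part that dominates before spending on the rest.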