28 Nov 2014, 14:39

Some personal thoughts on the Red Hat acquisition of FeedHenry

The acquisition of FeedHenry by Red Hat isn’t just one of the most exciting tech stories in Ireland in 2014, it is also wonderfully satisfying for me as one of only two people in Product Management.

When I found out that Red Hat was negotiating to buy the company, I couldn’t have been happier. Apart from the almost perfect cultural fit, it was very cool to be joining a company whose products I first used in 1997 with Red Hat Linux 4. By 1999, I told my CTO in Integral Design that PCs running Red Hat Linux would wipe his beloved Sun out. To be fair to him, he agreed. A day hasn’t gone by since ‘97 that I haven’t had Linux running somewhere, whether in skunkworks boxes under my desk; using VMware from ‘99; powering my startup from 2006 onwards; or powering FeedHenry from day 1.

The acquisition itself brought a perfect end to an incredible year for me with FeedHenry as director of Product Management. When I joined in July 2013, we had an excellent developer product but one whose capabilities were often hard to find or use. Development had begun on a complete revamp based on the same core technology but hadn’t yet kicked into high gear.

The most important thing for me and our VP of Product and Services, Javier Perez, was to get a really productive relationship cemented between Product, Engineering, Ops and Sales/Pre-Sales. The other critical thing was to get a big customer, our own PS developers, heavily involved in the direction of the Product. Succeeding with this ensured that we didn’t just achieve product-market fit, but we did it in a very effective way.

We continued to release customer-driven new features and upgrades on the existing platform throughout 2013. As the year closed, we got the upcoming Product to a place where Sales could demo it to customers and we began giving ultra-early access to a select few as we moved into 2014. The feedback was phenomenal. Everyone wanted it now! But we continued to refine and improve and dogfood it until we were happy that we had leapfrogged absolutely everyone in the industry. Finally in April we released FeedHenry 3. Shortly afterwards, the acquisition was underway.

We are now a few weeks post-closing and I realise daily just what a great fit the company and the product are with Red Hat and the middleware suite. In fact, a cross-functional team put together a demo for the Devoxx conference in Antwerp 2 weeks ago which showed a real-world integration of FeedHenry with AeroGear and JBoss Fuse. If we can achieve that so quickly, you’d better watch out for what’s coming in 2015!

01 Nov 2014, 16:10

Some thoughts on Twitter rebuilding a developer ecosystem and how they might succeed

First some very detailed background.

The Beginning: I joined Twitter in Feb 2007 so that I could live-Tweet the Irish Blog Awards results via SMS. That was when Twitter finally clicked with me. 2007-2008 was a fantastic time to be on Twitter if you had even a modicum of programming ability. There’s a chance it could be so again.

CorkProbs: Whilst LouderVoice took up 95% of my time around then, the other 5% was thinking about all of the incredible things that Twitter facilitated. I was particularly proud of my CorkProbs app which let you send problems related to roads/water/lights by @‘ing CorkProbs. The idea was that people would send those via SMS so they didn’t need a smartphone. I then took the RSS feed from that account and created a status site using Planet. There were only two problems with it. First, I never contacted Cork County Council to tell them to monitor it and second, there were probably only 20 people in Cork on Twitter at the time!

Mini-Reviews: My other inspired App was @review where you could send a specially structured review Tweet and we would aggregate those reviews into LouderVoice. Again we were trying to do things on Twitter long before critical mass, so it never took off.

Login with Twitter: I stopped the personal experiments when OAuth came in but we added Twitter Login to LouderVoice as soon as it was launched. Of the three identity options used by reviewers on LouderVoice, email is about 99%, Facebook is about 1% and Twitter is about 0.00000001%. In fact, in the entire time the LouderVoice business reviews service has been in operation, I think we’ve had fewer than 20 logins via Twitter.

HushVine: In 2011 we kicked off HushVine which was initially designed to be the ultimate Twitter filtering service. It would show you all the stuff you found interesting and hide all the crap and it would do it in a sharable way (“show me the best of the World Cup” or “hide every tweet about the goddamned World Cup”). We did some really successful widget proofs of concept with The Irish Examiner for the Irish Presidential election and Budget and also did a mobile app for Web Summit. Once we took part in the LaunchPad accelerator program, we narrowed it to TV and got some strong interest from local media organisations. But the Twitter API changes, Tweet-display changes, firehose changes and Twitter’s own interest in TV told me we’d be roadkill, so we put it on ice. I’m currently reading the Twitter book and finally getting some insight into those decisions by Twitter.

Alternatives: So we got to 2014 and I’d done the Jaiku thing, the status.net thing, the Buzz thing, the Google+ thing, the App.net thing and the Twister thing and yet I kept coming back to Twitter as a user. But under no circumstances would I consider building anything on Twitter as I’d just be waiting for the rug to be pulled out from under me without warning (cf Facebook too).

Fabric: And then last month Twitter announced Fabric and I had to take a step back and think about things again.

I’ve had a lot of titles in my career from Engineer to CEO to Director of Product Management, but I’ve realised that even from my first job, I was always a Product Manager. I was never responsible for just “the code”; I almost always had to figure out exactly what it was the customer wanted to ship overall because they rarely knew themselves. Design Services and Professional Services are a great learning environment for this. Even when writing Interrupt Service Routines I was also generating PowerPoint decks about the entire product’s features & benefits for others to use. What that has given me is an ability to look at individual Engineering features on their own merit but then set them in the context of the overall product and/or platform.

Which is why I find Fabric extremely interesting. I think this first micro-step in Twitter regaining developer confidence could succeed if they can provide a stable platform that benefits everyone: users, Twitter, advertisers and third party developers.

Platform: And that’s the point, Twitter has always been a platform to me, not a site. It’s a way of quickly getting information from those who have it to those who want it.

By taking a monetisable platform approach (note, not a dumb pipe), we can all benefit and Twitter can undo some of their most controversial decisions from the past, specifically the hard limits on third party client tokens. I completely understand why this was done due to the threat from UberMedia but I also think it has been holding the user growth back hugely.

Most of my Twitter interaction nowadays is via Tweetcaster Pro and Janetter but why couldn’t I have an X-Factor Twitter client or a One Direction client? Something that is both specific to a brand but also a full-blown Twitter client with rev-share? It seems nuts to me that I pay the creators of Tweetcaster/Janetter a small one-off fee for their app to access Twitter, none of which goes to Twitter, and there aren’t even ads in those apps. The key to being able to build rich apps is obviously Gnip and the firehose. But rather than Gnip being something a developer pays for, it should be a freely accessible programmable datasource with contextual advertising in-lined and rev-shared.

Ads: As a user, I remain shocked at how bad both Twitter and Facebook are at targeted advertising. I have close to 50,000 tweets and years of Facebook updates but I’ve never once seen an ad on either site that I would click on. I’m pretty clear with my purchase intent on Twitter, heck I even Tweet what KickStarter projects I back. But nothing useful, ever.

Again, this can be solved by a platform approach where third party developers deliver specific demographic groups to Twitter which generates the relevant ads and offers rev-share. TwAdsense. And the thing is, this doesn’t even need logged-in Twitter users, just vertical apps that use the platform to grab information of relevance for their user niches. Whilst Twitter’s MoPub has a classic Ad Network model, it looks like the developer has to do all the work around targeting. It also doesn’t seem that related to Twitter per se, just mobile in general.

TV: Then Twitter doesn’t have to worry about “doing TV” or “doing sports”, it just has to provide the platform base to allow others to do that. In the specific case of TV, the on-screen integrations have become ridiculous. Two examples - MythBusters, where Twitter was taking up a third of the screen, and Big Driver, where we were encouraged to Tweet along whilst a woman was being sexually assaulted in the TV show. Twitter is a second screen experience and trying to dominate the first screen is deeply intrusive, like the worst kind of product placement.

Mobile: Fabric’s focus on mobile is absolutely the right thing to do. I love Digits for its beautiful simplicity, as I said on Twitter when it launched.

But, Dave Winer is right, all the Fabric services absolutely have to get a JS implementation. Not just for Cordova Apps but also to reinvigorate Twitter web-app development. Like Dave, I’m a huge fan of Dropbox’s JS approach. In fact I’ve recently wondered if you could build a Twitter clone for your friends where Dropbox is that platform :-)

Old Media: Why shouldn’t a person’s entire Twitter interaction happen on the New York Times site? Not just using the old branded Twitter widgets but an entirely custom NYT UI where, yet again, it’s based around Ad rev-share. This is the complete opposite of Facebook where they want everything to happen on FB. Twitter should be old media’s best friend against Facebook, giving them new revenue streams and new ways to get people on their site.

Direct Messages: Seriously Twitter, Digits + Direct Messages = WhatsApp. It’s time to make DMs a first class citizen in the platform. You want stratospheric user growth? DMs! My 72 year old mother has just joined WhatsApp on her Nokia S40 phone so she can IM from Kilkenny in Ireland to her grandson in New York. He’s on Twitter. She could be on Twitter with Digits+DMs.

Lists+Filters: I know, I know. Lists never get much use on any site, including Twitter and Facebook. Except, I couldn’t deal with either site without them. Lists on Facebook get rid of the goddamned algorithmic filtering and my “Mobile” List on Twitter is where I spend most of my time reading.

A combination of powerful, shareable Lists+Filters would make Twitter infinitely more accessible to a much bigger audience. What are you interested in? Soccer? What team? Liverpool? Bam - a pre-canned filtered Liverpool FC bundle ready to go, that may have been crowd-sourced, not created by Twitter. For users like this, forget the concept of following people; they should follow filtered lists. This was basically the HushVine concept and it’s still valid. And can you think of a more perfectly tuned advertising audience? One based on interests.
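The HushVine-style idea is easy to sketch: a shareable filter bundle is just a named set of show/hide rules that anyone can apply to a timeline. A toy version (all names and data here are illustrative, not any real API):

```python
# A shareable filter bundle is just named show/hide keyword rules.
# Hide rules win; if show rules exist, a tweet must match one to pass.
def apply_filter(tweets, show=None, hide=None):
    show = [t.lower() for t in (show or [])]
    hide = [t.lower() for t in (hide or [])]
    out = []
    for tweet in tweets:
        text = tweet.lower()
        if any(term in text for term in hide):
            continue          # "hide every tweet about the World Cup"
        if show and not any(term in text for term in show):
            continue          # not part of the interest bundle
        out.append(tweet)
    return out

# A crowd-sourced "Liverpool FC" bundle could be shared as plain data
liverpool_bundle = {"show": ["liverpool", "lfc", "anfield"]}

timeline = ["LFC win again!", "World Cup final tonight", "Lunch was nice"]
print(apply_filter(timeline, **liverpool_bundle))  # ["LFC win again!"]
```

Because the bundle is plain data rather than code, it can be crowd-sourced and shared exactly as described above.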

Aside, Power-user rant: Speaking of algorithms, unless Twitter has invented Skynet in the past few months, their chances of showing me tweets I want to see are zero. They still think the person most suitable for me to follow is Barack Obama. Nice guy I’m sure, no interest in following him on Twitter. If they add algorithmic insertion of tweets, it has to be optional or they’ve just become Facebook. But the reason we love Twitter is that it isn’t Facebook.

Useful Big Data: Of course 2014 is all about the IoT hawtness but I’ve always thought that Twitter should be the conduit for useful IoT info. I wired the Bandon Flood Warning system into Twitter years ago. Now it is done manually but it still has quite a few locals following it for those alerts. I’ve always been amazed that Twitter or others haven’t built big-data analysis of the geo-movement of storm/hurricane tweets and then layered useful commercial information on top. So many useful tweets crying out to be Hadooped, used and monetised. Watching popular hashtags or search terms in the early days was a joy. Trying to do it during any major event in recent years has been ridiculous. I end up back watching human-curated stuff elsewhere. Twitter was supposed to replace that with smarts. And if it can’t, let others do it without putting roadblocks in their way.

Win, Win, Win, Win: So in summary, yes, Twitter can re-build the developer ecosystem but it’s not going to be easy and there has to be a very clear immediate revenue-based ROI for developers to start using the Fabric tools and whatever else is coming next. Even more than trust, what Twitter has to offer is guaranteed stability/continuity. No more rug-pulling, no more sun-setting, no more competing with developers. There is absolutely no reason why Twitter can’t be bigger than Facebook, but they’ll need developers to get them there.

30 Oct 2014, 09:24

As promised, yet more tech goodie backlog for your entertainment

This time it’s more about software than hardware. But first, hardware.

  • Lightning: We had a lightning strike near our home a few weeks back and my main Windows desktop has been acting dodgy ever since. I swapped the PSU but that didn’t help and finally came to the conclusion that some of the USB ports were damaged on the motherboard. One replacement Gigabyte 970A-DS3P later and all seems to be well. I also replaced an older hard-disk that sounded like a machine-gun since day 1 with a combo one that has a small amount of SSD. The silence is a joy.

  • Node.js: I built a small stats portal for a side-project in Node.js with a bit of Bootstrap, Google Charts and Passport. I still believe that a lot of Node’s power comes from the ecosystem and community. There are just so many good tutorials and sample applications out there that you can always find a good starting point. For example this one. This also means that you have the combined knowledge of a ton of people who have sorted out nasties like CSRF, CORS, Auth, etc. already. Everything I’ve done has been based on Express and MongoDB but HAPI is starting to get that community energy too. My first failed attempt at that portal was in Go, which leads nicely to….

  • Go: I’ve also been playing around with Go (#golang) which is a very interesting language built by Robert Griesemer, Rob Pike, Ken Thompson and others in Google. Anyone who knows C will feel very comfortable with it. It’s like C with all the badness removed. I took to it like a duck to water and found it extremely easy to learn and become productive with. It’s designed as a language for server apps and my first real app was a re-write of a database maintenance tool that started in Python, moved to Node.js and is now running perfectly in Go. The whole workflow around Go is a real strength and I can see why it’s gaining a lot of traction in the past 24 months with companies like Bitly, Docker, Apcera etc. Its two main weaknesses are not language related. First is the small size of the community - you cannot “program via Stack Overflow” the way you can with Node, there just isn’t the volume of information out there. Second is the state of Go web frameworks - there seems to be a strong anti-framework mindset in the community which thinks everyone should build from first principles. That’s fine for full-time hardcore developers but it puts off tinkerers like me who find frameworks a nice entry point into a language. So whilst full-stack frameworks like Beego have great potential (as do smaller ones like Revel etc), the number of developers is tiny and once again you are back to the community-size and availability of learning-material problem. So for the moment I’m going to focus on using it for tools, utilities and some fun stuff on the Raspberry Pi. Actually, the fact that it works seamlessly across Windows, OSX, Linux and generates static binaries for them is one of the other reasons I like it so much. If you have an hour, watch Rob Pike’s wonderful keynote at GopherCon in April where he deconstructs hello world.

  • Re-factoring: A major mistake I made back in 2008 was taking a short-cut and telling a developer to build a start-up’s Admin back-end on top of Django Admin. Initially it made lots of sense when most of the functionality was just interacting with the models, but as soon as you try to do anything new, you realise you have to do it as a bloody mishmash of hacks, workarounds and copy/pasted code. In a way I’m contradicting the previous point I made about Go but actually the error was more about not biting the bullet and re-implementing the Admin from scratch in the early days, once the initial prototype had done its job. The question now is whether it would be worth someone’s time, even as a multi-weekend/Christmas learning exercise, to re-do in Node and Express. The system actually started as a Turbogears application so this wouldn’t be the first time to make such a move!

  • Win8.1: Switched from Windows 7 to Windows 8.1. It’s still getting in the way and slowing me down, but less so than Win 8.0. The driver-signing nonsense is still infuriating, as is the method of turning that off. However I did learn the WinKey-X short-cut which gives you quick access to everything you need like Control Panel, Command Prompt and Shut-down menu.

  • Harp.js: This static blog you are reading is hosted on GitHub Pages and generated by Harp.js. I’m still a big fan of Harp and I’m thrilled it now works end-to-end on Windows. Just install Node.js and do npm install -g harp. I used to have a plethora of WordPress sites and blogs but after the relentless hacks of 2013 and 2014, I’m done with it. I have two sites left to port over to being based on Harp.js and then it’s buh-bye WordPress/PHP/MySQL/Insert-name-of-security-hole-riddled-plugin-here. We all moved to WordPress from Blogger/MT/etc for its ease of use and quick install but I think the move away will become a flood as people grow tired of the upgrade treadmill. The next major version of WordPress should be about security, not just of the core but of every plugin too.

  • Inbox: Got an invite to Google Inbox. Once signed up, I spent my entire time worried I was missing an email, or that I’d accidentally archived it, or trying to find the “Mark as Unread” button in Chrome. Sorry Google, but my first 5 minutes on GMail were a complete joy back in the day, whilst Inbox was an exercise in “what’s the actual point here, apart from being different?”.

  • IRC: It’s huge in Red Hat. I really love using it. Could it have a resurgence or will it go the way of Usenet? I’m using HexChat on Windows which is rather good. I think the mIRC UI is what kept me away for all those years.

  • Linux: I’m moving from Ubuntu/Lubuntu/Mint in all my VirtualBox VMs to CentOS 7 and Fedora 20. Apart from yum instead of apt-get, there really isn’t that much difference for normal day-to-day stuff. I can install all the usual tools and work as I did previously. The lack of window minimise/maximise/close buttons on Fedora is completely ridiculous tho.

  • Docker: Cos, SEO :-) But srsly, I’ve been kicking off some Docker instances of Mongo and Node apps inside a VBox VM and liking it a lot. Setting up the inter-instance networking is a bit of a pain tho. One thing that bothers me is the initial promise of tiny images seems to have fallen by the wayside. You really should avoid playing with it on home broadband if your base OS is different to the base OS used to build the image you want to play with. Also be very specific about what version of something you want or it’ll download all available versions.

  • OpenShift: Doing a bit of playing with Red Hat’s OpenShift PaaS. So far so good but I haven’t gone much further than Node.js hello world.

  • GoRead: Discovered a brilliant RSS reader a few months ago called GoRead that looks and acts a lot like good old Google Reader. This one is written in Go and you have to self-host. But I was thrilled to find that I can host it on Google App Engine on the free tier. I’ve stopped using all the centralised readers like Feedly since.

  • Bitbucket: I don’t think many people realise that BitBucket has most of the same features as GitHub but provides private repos for free. Its only real flaws are the lack of a GitHub Pages equivalent and the lack of 2 Factor Authentication (which is really unforgivable at this stage).

  • Pinboard: I finally dumped Delicious a few months back and paid the tiny few quid to get a Pinboard account. Not only is it a great bookmarking service that just works (and works quickly), the guy who runs it has one of the best Twitter accounts out there. It was able to import all of my Delicious bookmarks and the IFTTT and Twitter integrations mean that all my tweeted links, re-tweets and favourites automatically get saved as bookmarks.

  • Janetter: It’s still the best Tweetdeck-multi-column-style Twitter client on Windows and OSX (but not brilliant on Android). The word/user filters alone would cause me to pay many many Euro for it but the regex filters make it invaluable.

More tomorrow.

29 Oct 2014, 11:21

Months behind in my blogging. Here are some tech things for your enjoyment that have caught my eye recently

Once you get out of the habit of blogging it can take ages to get back in the zone. A week’s vacation this week should do the trick for me. This is partially a link dump and partially the subjects of a bunch of upcoming posts.

  • Red Hat acquired FeedHenry. I first installed Linux in 1997 when working in San Jose and was a Red Hat user all through those early years and into the initial Fedora years. I launched a start-up on 2 CentOS servers. After a period in the Ubuntu wilderness, it feels like I’m coming home :-)

  • Bought a Printrbot Simple - One of the cheapest 3d printers you can buy. Took me a lazy weekend to assemble. Still freaks me out that I can print plastic. Not using it as much as I’d expected as I need to up-skill in CAD first but love being able to print STL files off the web on a whim. Print quality not amazing but for €400, I can’t complain. Lots more posts coming on this. I’ll probably be upgrading from the fishing line to proper belts and pulleys soon.

  • Bought a DSO Nano oscilloscope - When I walked out of UCD in 1992 I swore I’d never touch an oscilloscope again as long as I lived. Far too many afternoons spent doing elec labs with those horrible unusable things. Turns out they are dead handy when you can’t figure something out from software! Have decided to sit down and learn how to use it properly.

  • Bought a Bus Pirate - This is a cool mini “logic analyser” that understands many common protocols like I2C, MIDI, SPI. I seem to spend inordinate amounts of time getting I2C and SPI working on my projects. This should finally help me figure out the nRF24L01 issues.

  • Have been getting frustrated with most Arduinos being 5V whilst most modules nowadays are 3.3V. Level shifters are a pain. So I got some 3.3V 8MHz Pro Minis to deal with this. Two days of grief trying to get them working with one module before I finally figured out that they cannot do 115200 serial comms reliably. Grr.

  • The ESP8266 is a stunning piece of kit. It’s a full Wi-Fi module and TCP/IP stack on a tiny board for only $4.50. It works over a serial connection and can Wi-Fi-enable the most basic of Arduinos. It has been a bit of a pain to get going with since there are tons of conflicting blogposts out there about it. I’m sure the stability will improve over time as the Open Source community focuses a lot of its attention on it. One important note is that it uses a lot of power and you’ll struggle to power it from a pin on an Arduino. This also means that long-term battery-powered operation is a no-no.

  • The Google Cardboard clones I’ve been getting from China are head-wreckers. It should not be this easy to give people a sense of Oculus Rift style VR. Sure they don’t compare with Oculus but it’s one of the first times I’ve seen the kids blown away by a piece of tech. They just got it! For the sake of a tenner, you’d be silly not to try it out. The Versailles demo and Google Earth are amazing. The simple roller-coaster is less so but you need to be sitting down!

  • The Plug-up security key for 2 Factor Authentication is interesting. It’s a cheap USB key that implements the FIDO U2F standard and gives you another way of authenticating for GMail (on Chrome only so far). I don’t know if I’ll keep using it but it’s a damn sight quicker to use than the bloody Authenticator app on your phone.

  • I finally got a proper soldering station. From 1984-2012 I used the one from my youth. Then two years with a crappy Draper iron from a local hardware shop with some usage of the €10 Lidl one. This Aoyue Int937 is a good quality Chinese clone of an older Hakko design. So far I’m finding it extremely good. Even the reduction in waiting time from 5mins+ to 30 secs for it to heat up makes it worth it. A German supplier shipped it to Ireland for €65 incl delivery.

  • eFibre is now available in Old Chapel, Bandon. You have no idea how big a deal this is for me. We’ve been stuck on 6Mbps/384kbps for what feels like a decade. Things like backups, Dropbox, Webinars, Netflix etc etc have been torture. All our kids are intimately familiar with “buffering”. Given the closeness of the VDSL cabinet, we’re hoping for the full 90Mbps. Fingers crossed.

  • The Red Bear BLE Nano Kit is a superb way of getting into BLE, iBeacons etc. I’m playing around with one for the Google Physical Web initiative. The idea is that every beacon just advertises a URL instead of some Apple-mandated or random UUID. So simple but actually quite profound. I got the kit from Exp Tech in Germany and they seem to be good value for electronics in general. €7 postage to Ireland for most things, with the advantages of the Eurozone and speedy delivery.

  • Speaking of the Red Bear kit, this is also my first experience of the ARM mbed online development studio. Wow, love it! It generates the binary and you download it in the browser, but the really impressive bit is that you drop the binary on the pseudo memory stick that the module presents to Windows and it auto-installs and runs the binary for you. So so so much better and slicker than the poxy USB-Serial messing on Arduino.

  • I haven’t been able to get the cheap HM-10 BLE module to work with Google Physical Web. The Chinese manufacturer sent me a one-line “just do X” in response to my query, without explaining how. So I may have to give up on it and only use it as an iBeacon.

  • Halloween is coming on Friday. Projects under-way. Less ambitious than before but should have some pics up on Saturday :-)

  • I was disappointed that my 13 year old Mondeo uses a version of CAN bus that is incompatible with the fun OBD-II bluetooth modules that you can get for half-nothing online. If yours is newer, you can plug it in just under your right knee. However it worked beautifully on my wife’s 10 year old Zafira. You have to remove the panel underneath the handbrake. The app you use on your Android phone is called Torque. It’s really impressive to see all of your engine stats appearing on your phone. I’d love to see us return to the old days where everyone was able to fix their own cars. I used to do a lot of the maintenance on my old Citroens (because, ye know, they’re Citroens) but never touched the Mondeo until recently. I’ve replaced both the coil pack and all the brake-pads myself but I’d love to have been able to use the OBD-II to analyse the engine.

  • The power of APIs - Just had a request this morning from someone who wants to make use of an API I launched in 2008. It had some updates in 2009 but hasn’t been changed since then. It still works perfectly and is at the heart of a bunch of features and tooling/automation. I’ve been saying it for years but you should do everything API-first. You should read API Evangelist regularly, he has this stuff nailed.
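The 115200 grief with the 8 MHz Pro Minis above has a simple numeric explanation: the AVR UART derives its baud rate by dividing the system clock by 16×(UBRR+1) in normal mode (the standard formula from the AVR datasheets), and at 8 MHz no integer divisor lands close enough to 115200. A quick back-of-envelope check:

```python
# Why an 8 MHz Pro Mini can't do 115200 reliably: the achievable baud
# rate is f_cpu / (16 * (UBRR + 1)) for integer UBRR (AVR normal mode),
# and the nearest integer leaves a large error at 115200.
def baud_error(baud, f_cpu=8_000_000):
    ubrr = round(f_cpu / (16 * baud) - 1)       # nearest achievable divisor
    actual = f_cpu / (16 * (ubrr + 1))          # baud rate actually generated
    return round((actual - baud) / baud * 100, 1)   # % error

for baud in (115200, 57600, 38400):
    print(baud, baud_error(baud))
# 115200 is ~8.5% off (hopeless); 38400 is ~0.2% off (rock solid)
```

AVR UARTs tolerate only a couple of percent of baud error, so dropping the sketch to 38400 (or switching the clock source) is the usual fix.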
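On the ESP8266 above: the stock Espressif firmware speaks an AT command set over that serial connection. A tiny sketch of composing the commands (the command names AT+CWMODE and AT+CWJAP are from the stock AT firmware, but exact syntax varies between firmware builds, so treat this as an assumption-laden sketch):

```python
# ESP8266 AT commands are plain ASCII lines terminated with CRLF,
# written over the serial link (e.g. with pyserial). Command names
# per the stock Espressif AT firmware; syntax varies by build.
def at(cmd):
    return (cmd + "\r\n").encode("ascii")

def join_ap(ssid, password):
    # AT+CWJAP joins a Wi-Fi access point
    return at(f'AT+CWJAP="{ssid}","{password}"')

print(at("AT+CWMODE=1"))            # station mode
print(join_ap("myssid", "secret"))  # join command, ready to write to serial
```

Note that many of these modules default to 115200 baud, which is exactly the rate the 8 MHz Pro Minis struggle with, so check whether your firmware lets you drop the module's baud rate.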
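The Physical Web beacons above fit a whole URL into a tiny BLE advertisement by compressing common URL pieces to single bytes, in the UriBeacon/Eddystone-URL style. A sketch of that encoding (tables abbreviated from the published spec; this is an illustration, not a full encoder):

```python
# UriBeacon / Eddystone-URL style compression: the scheme prefix and
# common domain suffixes each collapse to one byte; everything else
# is sent as plain ASCII. Tables abbreviated from the published spec.
PREFIXES = [("http://www.", 0x00), ("https://www.", 0x01),
            ("http://", 0x02), ("https://", 0x03)]
SUFFIXES = [(".com/", 0x00), (".org/", 0x01), (".edu/", 0x02),
            (".net/", 0x03), (".info/", 0x04), (".biz/", 0x05),
            (".gov/", 0x06), (".com", 0x07), (".org", 0x08)]

def encode_url(url):
    out = bytearray()
    for text, code in PREFIXES:
        if url.startswith(text):
            out.append(code)
            url = url[len(text):]
            break
    while url:
        for text, code in SUFFIXES:
            if url.startswith(text):
                out.append(code)
                url = url[len(text):]
                break
        else:
            out.append(ord(url[0]))    # ordinary character, sent as-is
            url = url[1:]
    return bytes(out)

print(encode_url("http://www.example.com/"))  # 0x00 + b"example" + 0x00
```

That is why a URL-sized payload fits where a 16-byte UUID would otherwise go: "http://www.example.com/" compresses to just 9 bytes.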

OK, I’ll probably do another batch of these tomorrow. For the moment, it’s time to tie together a bunch of things I’ve been playing with since 2012.

09 Sep 2014, 06:45

Hi I'm Batman. Check out my motorised Lego helicopter.

I have long bemoaned Lego’s approach to adding electronics/power to their kits. Technics seems to be its own little sub-section and Mindstorms is a crazy waste of money in an Arduino and Raspberry Pi world. Now that they are the biggest toy company globally, I’d love to see how either of those product ranges are adding to the bottom line.

In that spirit I was very excited about the possibilities of the Lego Batman kit I got in Hamley’s recently. After beating back the sales people who tried to sell me blue or white Batmen (are they completely insane?), I finally found a proper black one from the movie.

First Time

On Saturday, I started assembly with my two daughters and a small bit of help/interest from one of the boys. When I say “I started assembly”, I really mean I handed the pieces to the girls who then clicked them into place with zero delay and almost no reference to the instructions. Their spatial reasoning seems to be at 1000x the speed of mine.

We were thrilled with the result but I have bigger plans for this helicopter. Phase one was the motor for the rotor. I have a bunch of rumble motors from broken original XBOX controllers and it was a trivial task to hotglue on a piece of metal to connect it to the rotor.

I tried to zip tie the motor to the helicopter but then the general solution dawned on me: I should be hotgluing these add-ons to pieces of Lego so that it becomes a simple exercise to add/remove them from kits, depending on the mood you are in.

This worked like a charm and all of the kids were suitably impressed. Here it is in action. Watch those fingers!

We’ll switch to a LiPo battery and smaller switch and wiring later in the week. The kids have also suggested the next set of improvements which we’ll do next weekend. Yes of course it involves LEDs.

05 Jul 2014, 12:02

The first genuine leaked pictures of the Apple iWatch will amaze you

By bringing together gurus of branding and design, Apple has changed everything for everyone, yet again.

iWatch 1.0

30 Jun 2014, 18:34

I'm going to miss all those great times on Orkut.

All two of them. In 2007. 7 years and 2 days ago.

Orkut

30 Jun 2014, 08:27

Kimono is a dead-easy zero-code way of building APIs from scraped web-pages

Back in 2011, I created a simple scraper in Python to take the river level reported by the Bandon Flood Early Warning System every 15 minutes and save it in Google Fusion Tables. In 2012, I extended it to also save the data on Pachube/Cosm/Xively/Fleeglrheumazoid (or whatever insane name they have this week). So you have 2.5 years of river data in tabular and graphic form.

The code itself is very simple and just involved walking through the page (actually a bloody iframe!) to get the element I needed. But it’s brittle, single use and runs on a home server behind the firewall. I also have a gap in the data in 2012 where I didn’t notice that the cronjob had screwed up for 2 weeks.
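The original code isn't shown here, but the "walk the page for one element" approach can be sketched with nothing but the stdlib. The element id ("level") and the page snippet below are hypothetical; with the real FEWS page you'd fetch the iframe's own URL with urllib before parsing:

```python
# Minimal sketch of the scrape: walk the HTML and grab the text of
# one element. The id "level" and the sample page are hypothetical.
from html.parser import HTMLParser

class LevelParser(HTMLParser):
    def __init__(self):
        super().__init__()
        self.capture = False
        self.level = None

    def handle_starttag(self, tag, attrs):
        if dict(attrs).get("id") == "level":   # hypothetical element id
            self.capture = True

    def handle_data(self, data):
        if self.capture and self.level is None:
            self.level = data.strip()

    def handle_endtag(self, tag):
        self.capture = False

page = '<div><span id="level">1.85</span> metres</div>'
p = LevelParser()
p.feed(page)
print(p.level)  # "1.85"
```

And this is exactly the brittle part: rename or move that element and the scraper silently breaks, which is the problem Kimono sidesteps by letting you re-select the element visually.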

Last week I heard about Kimono and really liked the sound of it. You basically point it at any web-page and it gives you a version of the page where you can select elements and get API end-points for them. So any site that has info you’d like to use or open up but where there is no existing API, can be API-ified. You need no coding knowledge to do this.

Kimono

This has huge potential to generate even more Irish OpenData without relying on developers to do the work. Given how little data has been opened up here, either officially or by OpenData enthusiasts, this can only be a good thing.

If you’d like to access the Bandon River level programmatically now, it is here as JSON, CSV and RSS.

As an exercise for the reader, why not try the truly hilarious Cork County Council Planning Enquiry System, which “works best with Internet Explorer 5.5 or newer” :-) Of course, the “accept terms and conditions” button does not work with Chrome. It looks like the same web gurus who designed Bandon FEWS also did this site, since both use 1990s-style frames, which Kimono does not handle at all. I just spent 2 mins playing with it and picked out the central frame, but Kimono-ifying it triggers a session-timeout error from the CorkCoCo site. I think a small bit of effort could get this working. Any takers, or do you all prefer to just talk about OpenData?

23 Jun 2014, 08:05

If you wondered whether 3D Printing would ever go mainstream, here's the proof.

I already have a request in from her sister to do her name with each character a different colour. Time to figure out how to use 123D properly.

3D Names

23 Jun 2014, 07:27

A simple Node.js script to setup a new blog post in Harp.js

One small annoyance with the Baseline boilerplate in Harp.js is that every blogpost needs to be listed in a file called _data.json with all the relevant metadata. Whilst it’s not a huge job to slug-ify a title and add the ID and epoch time, it’s sufficiently annoying that it adds friction to blogging more often. Here’s the entry for this blogpost:


  "a-simple-nodejs-script-to-setup-a-new-blog-post-in-harpjs": {
    "ID": 1291,
    "author": "admin",
    "date": 1403504847928,
    "ptype": "post",
    "description": "This little script generates the relevant metadata in the _data.json file including epoch time and optional Facebook thumbnail URL if you are using the Baseline boilerplate.",
    "slug": "a-simple-nodejs-script-to-setup-a-new-blog-post-in-harpjs",
    "status": "publish",
    "title": "A simple Node.js script to setup a new blog post in Harp.js",
    "FBImage": "https://s3-eu-west-1.amazonaws.com/conoroneill.net/wp-content/uploads/2014/06/newpost.jpg"
  }

This little script just generates everything you need once you give it a title, description and image “thumbnail”. The latter is something I added to the _data.json “schema” so that when my posts are auto-shared to Facebook by dlvr.it, the image there isn’t the same every time but one of my choosing.


// newpost.js
// Create all the metadata for a new Harp blogpost
// node newpost.js
// Uses and updates the _data.json file in your _harp directory
// It prompts you for the post-title, post-description and URL of an image that will appear when syndicated to Facebook

// Copyright (C) 2014 Conor O'Neill
// Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
// The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

var slug = require('slug');
var fs = require('fs');
var file = '../_data.json';
var config;

fs.readFile(file, 'utf8', function (err, data) {
  if (err) {
    console.log('Error: ' + err);
    return;
  }
  config = JSON.parse(data);

  var readline = require('readline');
  var rl = readline.createInterface({
    input: process.stdin,
    output: process.stdout
  });

  rl.question("Title: ", function(title) {
    rl.question("Description: ", function(description) {
      rl.question("Facebook Thumbnail Image URL: ", function(fbImage) {
        var author = "admin";
        var ptype = "post";
        var status = "publish";

        // slug module doesn't replace periods which causes Harp to choke when serving up static file
        var slugTitle = slug(title).toLowerCase().split('.').join("");
        var publishDate = (new Date).getTime();

        var maxProp = "ID";
        var maxVal = 0;

        // Find the highest existing post ID so the new post can increment it
        for (var post in config) {
          var value = parseInt(config[post][maxProp], 10);
          if (value > maxVal) {
            maxVal = value;
          }
        }

        var id = maxVal + 1;

        var newPost = {"ID": id, "author": author, "date": publishDate, "ptype": ptype, "description": description, "slug": slugTitle, "status": status, "title": title, "FBImage": fbImage};

        config[slugTitle] = newPost;

        fs.writeFile(file, JSON.stringify(config, null, 2), function(err) {
          if (err) {
            console.log(err);
          } else {
            console.log("_data.json was updated");
            fs.writeFile("../" + slugTitle + ".md", description, function(err) {
              if (err) {
                console.log(err);
              } else {
                console.log("The new post was created as ../" + slugTitle + ".md");
              }
            });
          }
        });
        rl.close();
      });     
    });
  });
});


Source code is on GitHub, as always.

One interesting side-note was how much effort it took to find a Node module that asks for user input in the way I needed. There are lots of modules out there for Q&A, and some of them make it very easy to ask lots of questions, but not one of them (not one!) is able to deal with input that stretches beyond the terminal width. All of them start overwriting what you type on the screen instead of scrolling. That’s a pretty staggering oversight by a lot of developers. Eventually I fell back on the classic readline module, which I have used in many languages over the years, and it worked like a charm.