countculture

Open data and all that

Posts Tagged ‘finance’

Open Data: A threat or saviour for democracy?

with 2 comments

This is my presentation at the superb OKCon 2011 conference in Berlin last week. It’s obviously openly licensed (CC-BY), so feel free to distribute it widely. Comments also welcome.

Written by countculture

July 4, 2011 at 10:30 am

When Washington DC took a step back from open data & transparency

leave a comment »

When the amazing Emer Coleman first approached me a year and a half ago to get feedback on the plans for the London datastore, I told her that the gold standard for such datastores was the one run by the District of Columbia, in the US. It wasn’t just the breadth of the data; it was that DC seemed to have integrated the principles of open data right into its very DNA.

And it was this commitment we had in mind when we were deciding which US jurisdictions to scrape first for OpenCorporates, whose simple (but huge) goal is to build an open global database of every registered company in the world.

While there were no doubt many things that the DC company registry could be criticised for (perhaps it was difficult for the IT department to manage, or problematic for the company registry staff), for the visitors who wanted to get the information it worked pretty well.

What do I mean by worked well? Despite, or perhaps because of, the fact that it was quite basic, you could use any browser (or screenreader, for those with accessibility issues) to search for a company and get the information about it.

It also had a simple, plain structure, with permanent URLs for each company, meaning search engines could easily find the data, so that if you searched for a company name on Google there was a pretty good chance you’d get a link to the right page. This also meant other websites could ‘deep-link’ to the specific company, and that links could be shared by people on social networks, in emails, wherever.

Finally, it meant that it was easy to get the information out of the register, by browsing or by scraping (we even used the scraper we wrote on ScraperWiki as an example of how to scrape a straightforward company register as part of our innovative bounty program).
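To make that concrete, here’s a minimal sketch of the kind of scraper involved. Everything in it is illustrative rather than real: the URL pattern, and the assumption that each company page is a simple labelled HTML table, are mine, not the DC registry’s.

    # Minimal sketch of scraping a simple company register.
    # The URL pattern and HTML structure are hypothetical,
    # for illustration only -- not the real DC registry.
    import requests
    from bs4 import BeautifulSoup

    def scrape_company(registry_url, company_id):
        # Permanent, predictable URLs are what make this trivial
        resp = requests.get(f"{registry_url}/companies/{company_id}")
        resp.raise_for_status()
        soup = BeautifulSoup(resp.text, "html.parser")
        record = {}
        # Assume labelled rows: <tr><th>Name</th><td>ACME INC</td></tr>
        for row in soup.select("table tr"):
            th, td = row.find("th"), row.find("td")
            if th and td:
                record[th.get_text(strip=True)] = td.get_text(strip=True)
        return record

    print(scrape_company("http://registry.example.gov", "123456"))

The point isn’t the code; it’s that a register built from plain HTML with permanent URLs makes this sort of reuse almost trivially easy.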

It was, for the most part, what a public register should be, with the exception of providing a daily dump of the data under an open licence.

So it was a surprise a couple of weeks ago to find that they had redone the website, and taken a massive step back, essentially closing the site down to half the users of the web, and to those search engines and scrapers that wanted to get the information in order to make it more widely available.

In short it went from being pretty much open, to downright closed. How did they do this? First they introduced a registration system. Now, admittedly, it’s a pretty simple registration process, and doesn’t require you to submit any personal details. I registered as ‘Bob’ with a password of ‘password’ just fine. But as well as adding friction to the user experience, it also puts everything behind the signup out of the reach of search engines. Really dumb. Here’s the Google search you get now (a few weeks ago there were hundreds of thousands of results):

The other key point about adding a registration system is that its sole purpose is to be able to restrict access to certain users. Let me repeat that, because it goes to the heart of the issue of openness and transparency, and why this is a step back from both by the District of Columbia: it allows them to say who can and can’t see the information.

If open data and transparency is about anything, it’s about giving equal access to information no matter who you are.

The second thing they did was build a site that doesn’t work for those who don’t use Internet Explorer 7 and above, including those who use screenreaders. That’s right: in the year 2011, when even Microsoft is embracing web standards, they decided to ditch them, and with them nearly half the web’s users, and everyone who uses a screenreader. (Is this even allowed? Isn’t it covered by the Americans with Disabilities Act?)

In the past couple of weeks, I’ve been in an email dialogue with the people in the District of Columbia behind the site, to try to get to the bottom of this, and the bottom seems to be that the accessibility of the site, the ability of search engines to index it, and the ability of people to reuse the data just aren’t a priority.

In particular, they aren’t a priority compared with satisfying the needs of their ‘customers’, meaning those companies that file their information (and perhaps, more subtly, those companies whose business models depend on the data being closed). Apparently some of the companies had complained that they were being listed, contacted and/or solicited without their approval.

That’s right, the companies on the public register were complaining that their details were public. Presumably they’d really rather nobody had this information. We’re talking about companies here, remember, who are supposed to thrive or fail in the brutal world of the free market, not vulnerable individuals.

It’s worth mentioning here that this tendency to think that the stakeholders (hate that word) are only those you deal with day-to-day is a pervasive problem in government in all countries, and is one of the reasons why governments are failing to benefit from open data the way they should, and failing too to retool and restructure for the modern world.

Sure, we can work around these restrictions and probably figure out a way to scrape the data, but it’s a sad day to see one of the pioneers of openness and transparency take such a regressive step. What’s next? Will the DC datastore take down its list of business licence holders, or maybe the DC purchase order data, all of which could be used for making unsolicited requests to these oversensitive and easily upset businesses?

p.s. Apparently this change was in response to an audit report, which I’ve asked for a copy of but which hasn’t yet been sent to me. Any sleuthing or FOI requests gratefully received.

p.p.s. I also understand there’s new DC legislation, recently passed, that requires further changes to the website, although again the details weren’t given to me, and I haven’t had time to search the DC website for them.

Written by countculture

June 7, 2011 at 1:39 pm

George Osborne’s open data moment: it’s the Treasury, hell yeah

with 2 comments

As a bit of an outsider, reading the government’s pronouncements on open data feels rather like reading official Kremlin statements during the Cold War. Sometimes it’s not what they’re saying, it’s who’s saying it that’s important.

And so it is, I think, with George Osborne’s speech yesterday morning at Google Zeitgeist, at which he stated, “Our ambition is to become the world leader in open data, and accelerate the accountability revolution that the internet age has unleashed”, and “The benefits are immense. Not just in terms of spotting waste and driving down costs, although that consequence of spending transparency is already being felt across the public sector. No, if anything, the social and economic benefits of open data are even greater.”

This is strong, good stuff, and it’s significant that it comes from Osborne, who has not previously taken a high-profile position on open data and open government, leaving that variously to the Cabinet Office Minister Francis Maude, Nick Clegg and even David Cameron himself.

It’s also intriguing that it comes amid the apparent burying of the Public Data Corporation, which got just a holding statement in the budget, and no mention at all in Osborne’s speech.

But more than that it shows the Treasury taking a serious interest for the first time, and that’s both to be welcomed, and feared. Welcomed, because with open data you’re talking about sacrificing the narrow interests of small short-term fiefdoms (e.g. some of the Trading Funds in the Shareholder Executive) for the wider interest; you’re also talking about building the essential foundations for the 21st century. And both of these require muscle and money.

The Treasury also oversees a number of datasets which have hitherto been very much closed data, particularly the financial data overseen by the Financial Services Authority, the Bank of England, and perhaps even some HMRC data. I’ve started the ball rolling by scraping the FSA’s Register of Mutuals, which we’ve just imported into OpenCorporates, tying the entries to the associated entries in the UK Register of Companies.
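The fiddly part of tying two registers together is usually the name matching. Here’s a rough sketch of the normalisation step – purely illustrative; the real matching in OpenCorporates is more involved than this:

    # Rough sketch: normalise names before comparing a scraped register
    # against registered-company names. Illustrative only.
    import re

    NOISE = {"limited", "ltd", "plc", "llp", "the", "and"}

    def normalise(name):
        # Lowercase, strip punctuation, drop legal-form noise words
        words = re.sub(r"[^a-z0-9 ]", " ", name.lower()).split()
        return " ".join(w for w in words if w not in NOISE)

    def match(mutuals, companies):
        by_norm = {normalise(c["name"]): c["number"] for c in companies}
        return {m["name"]: by_norm.get(normalise(m["name"])) for m in mutuals}

    companies = [{"name": "ACME Widgets Limited", "number": "01234567"}]
    mutuals = [{"name": "ACME WIDGETS LTD."}]
    print(match(mutuals, companies))  # {'ACME WIDGETS LTD.': '01234567'}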

Feared, because the Treasury is not known for taking prisoners, still less for working with the community. And the fear is that rather than leverage the potential open data offers for a multitude of small distributed projects (many of which will necessarily and desirably fail), and rather than use the wealth of expertise the UK has built up in open data, they will go for big, highly centralised projects.

I have no doubt the good intentions are there, but let’s hope they don’t do a Team America here (and this isn’t meant as a back-handed reference to Beth Noveck, for whom I have a huge amount of respect, and who’s been recruited by Osborne), and destroy the very thing they’re trying to save.

Written by countculture

May 17, 2011 at 2:27 pm

Not the way to build a Big Society: part 1, NESTA

with 9 comments

I took a very frustrating phone call earlier today from NESTA, an organisation I’ve not had any dealings with before, and about which I don’t actually have a view, or at least didn’t.

It followed from an email I’d received a couple of days earlier, which read:

I am contacting you about a project NESTA are currently working on in partnership with the Big Society Network called Your Local Budget.

Working with 10 pioneer local authorities, we are looking at how you can use participatory budgeting to develop new ways to give people a say in how mainstream local budgets are spent. Alongside this we will also be developing an online platform that enables members of the public to understand and scrutinise their local authority’s spending, and connect with each other to generate ideas for delivering better value for money in public spending.

We would like to share our thinking and get your thoughts on the online tool to get a sense of what is needed and where we can add value. You are invited to a round table discussion on Friday 19 November, 11am – 12.30pm at NESTA that will be chaired by Philip Colligan, Executive Director of the Public Services Lab. Following the meeting we intend to issue an invitation to tender for the online tool.

Apart from the short notice & terrible timing (it clashes with the Open Government Data Camp, to which you’d hope most of the people involved would be going), the main question I had was this:

Why?

I got the phone call because I couldn’t make the round table, and for some feedback, and this was the feedback I gave: I don’t understand why this is being done. At all.

Putting aside the participatory budgeting part (although this problem seems to be getting dealt with by Redbridge council and YouGov, whose solution is apparently being offered to all councils), there’s the question of the “online platform that enables members of the public to understand and scrutinise their local authority’s spending, and connect with each other to generate ideas for delivering better value for money in public spending.”

Excuse me? Most of the data hasn’t been published yet, and there are several known organisations and groups (including OpenlyLocal) that have publicly stated they are going to be importing this data and doing things with it – visualising it, and allowing different views and analyses. Additionally, OpenlyLocal is already talking with several newspaper groups to help them re-use the data, and we are constantly evolving how we match and present the data.

Despite this, NESTA seems to have decided that it’s going to spend public money on coming up with a tendered solution to a problem that may be solved for zero cost by the private sector. Now I’m no roll-back-the-government red-in-tooth-and-claw free marketeer, but this is crazy, and I said as much to the person from NESTA.

Is the roundtable to decide whether the project should be done, or what should be done? I asked. The latter, I was told. So, they’ve got some money and have decided they’re going to spend it, even though the need may not be there. At a time when welfare payments are being cut and essential services are being slashed, for this sort of thing to happen is frankly outrageous.

There are other concerns here too – I personally think websites such as this are not suitable for a tender process, as that doesn’t encourage or often even allow the sort of agile, feedback-led process that produces the best websites. They also favour those who make their living by tendering.

So, NESTA, here’s a suggestion. Park this idea for 12 months, and in the meantime give the money back to the government. If you want to act as an angel funder then act as one (and the ones I’ve come across don’t do tendering). A reminder: your slogan is ‘making innovation flourish’, but sometimes that means stepping back and seeing what happens. This is not the way to build a Big Society.

Written by countculture

November 17, 2010 at 2:27 pm

Open data, fraud… and some worrying advice

with 6 comments

One of the most commonly quoted concerns about publishing public data on the web is the potential for fraud – and certainly the internet has opened up all sorts of new routes to fraud, from Nigerian email scams, to phishing for bank account logins, to key-loggers, to identity theft.

Many of these work using two factors – the acceptance of things at face value (if it looks like an email from your bank, it is an email from them), and flawed processes designed to stop fraud but which inconvenience real users while making life easy for criminals.

I mention this because of some pending advice from the Local Government Association to councils regarding the publication of spending data, which strikes me as not just flawed, but highly dangerous and an invitation to fraudsters.

The issue surrounds something that may seem almost trivial, but bear with me – it’s important, and it’s off such trivialities that fraudsters profit.

In the original guidance for councils on publishing spending data we said that councils should publish both their internal supplier IDs and the supplier VAT numbers, as it would greatly aid the matching of supplier names to real-world companies, charities and other organisations, which is crucial in understanding where a local council’s money goes.

When the Local Government Association published its Guidance For Practitioners it removed those recommendations in order to prevent fraud. It has also suggested using the internal supplier ID as a unique key to confirm supplier identity. This betrays a startling lack of understanding and, worse, opens up a serious vector for criminals to defraud councils of large sums of money.

Let’s take the VAT numbers first. The main issue here appears to be so-called missing trader fraud, whereby VAT is fraudulently claimed back from governments. Now, it’s not clear to me that publishing VAT numbers alongside supplier names makes this fraud any easier, and you would think the Treasury, who recommend publishing the VAT numbers of suppliers in their guidance (PDF), would be alert to this (I’m told they did check with HMRC before issuing their guidance).

However, that’s not the point. If it’s about matching VAT numbers to supplier names, there are already several routes for doing this, with the ability to retrieve tens of thousands of them in the space of an hour or so, including this one:

http://www.google.co.uk/#sclient=psy&hl=en&q=%27vat+number+gb%27+site:com

Click on that link and you’ll get something like this:

Whether you’re a programmer or not, you should be able to see that it’s a trivial matter to go through those thousands of results and extract the company name and VAT number, and bingo, you’ve got that which the LGA is so keen for you not to have. So those who are wanting to match council suppliers don’t get the help a VAT number would give, and fraudsters aren’t disadvantaged at all.
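In case that’s not obvious, here’s roughly all it takes. The regex assumes the common textual forms (‘VAT number GB…’ and the like); pairing each number with the company name would just need a little page-specific parsing on top:

    # Sketch: pull VAT numbers out of fetched pages. UK VAT numbers are
    # 'GB' plus nine digits (usually printed 3-4-2); the surrounding-text
    # pattern is an assumption about how sites typically print them.
    import re

    VAT_RE = re.compile(
        r"VAT\s+(?:number|no\.?|registration)?\s*[:\-]?\s*"
        r"(GB\s?\d{3}\s?\d{4}\s?\d{2})",
        re.IGNORECASE,
    )

    def extract_vat_numbers(page_text):
        return [m.replace(" ", "") for m in VAT_RE.findall(page_text)]

    sample = "Acme Ltd. VAT number: GB 123 4567 89. Registered in England."
    print(extract_vat_numbers(sample))  # ['GB123456789']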

Now, let’s turn to the rather more serious issue of internal Supplier IDs. Let me make it clear here: when matching council or central government suppliers, internal Supplier IDs are useful – they make the job easier and the matching more accurate – and they also help with understanding how much redacted payees are receiving in total (you’d be concerned if a redacted person/company received £100,000 over the course of a year, and without some form of supplier ID you wouldn’t know that). However, it’s not some life-or-death battle over principle for me.

The reason the LGA is advising councils not to publish them, however, is much more serious, and dangerous. In short, they are proposing to use the internal Supplier ID as a key to confirm the supplier’s identity, and so allow the supplier to change details, including the supplier bank account (the case brought up to justify this was the recent one of South Lanarkshire, which didn’t involve any information published as open data, just plain old fraudster ingenuity).

Just think about that for a moment, and then imagine that it’s the internal ID number the council uses for you in connection with paying your housing benefit. If you wanted to change your details, say to have the money paid into a different bank account, you’d have to quote it – and just how many of us would have somewhere both safe to keep it and easy to find (and what about when you separate from your partner?).

Similarly, where and how do we really think suppliers are going to keep this ID (stuck on a post-it note on the accounts receivable clerk’s computer screen?), and what happens when they lose it? How do they identify themselves to find out what it is, and how will a council go about issuing a new one should the old one be compromised – is there any way of doing this except by setting up a new supplier record, with all the problems that brings?

And how easy would it be to do a day or two’s temping in a council’s accounts department, do a dump/printout of all the Supplier IDs, and then pass them on to fraudsters? The possibilities – for criminals – are almost limitless, and the Information Commissioner’s Office should put a stop to this at once if it is not to lose a serious amount of credibility.

But there’s a bigger underlying issue here, and it’s not that organisations such as the LGA don’t get data (although that is a problem); it’s that such bodies think that by introducing processes they can engineer out all risk, and that leads to bad decisions. Tell someone that suppliers changing bank accounts is very rare and should always be treated with suspicion, and fraud becomes more difficult; tell someone that they should accept internal supplier IDs as proof of identity, and it becomes easy.

Government/big-company bureaucrats not only think like government/big-company bureaucrats, they build processes that assume everyone else does too. The problem is that this both makes life more difficult for ordinary citizens (as most encounters with bureaucracy make clear) and makes it easy for criminals (who by definition don’t follow the rules).

Written by countculture

October 26, 2010 at 11:38 am

Opening up council accounts… and open procurement

with 8 comments

Since OpenlyLocal started pulling in council spending data, it’s niggled at me that it’s only half the story. Yes, as more and more data is published you’re beginning to get a much clearer idea of who’s paid what. And if councils publish it at a sufficient level of detail and consistently categorised, we’ll have a pretty good idea of what it’s spent on too.

However, useful though that is, it’s like taking a peek at a company’s bank statement and thinking it tells the whole story. Many of the payments relate to goods or services delivered some time in the past, some are for things that have not yet been delivered, and there are all sorts of things (depreciation, movements between accounts, accruals for invoices not yet received) that won’t appear on there.

That’s what the council’s accounts are for — you know, those impenetrable things locked up in PDFs in some dusty corner of the council’s website, all sufficiently different from each other to make comparison difficult:

For some time, the holy grail for projects like OpenlyLocal and Where Does My Money Go has been to get the accounts in a standardised form, to make comparison easy not just for accountants but for regular people too.

The thing is, such a thing does exist, and it’s sent by councils to central government (the Department for Communities and Local Government, to be precise) for them to use in their own figures. It’s a hellishly complex spreadsheet called the Revenue Outturn form that must be filled in by each council (to get an idea, have a look at the template here).

These forms aren’t published anywhere by the DCLG, but they contain no state secrets or sensitive information; it’s just that the procedure being followed is the same one as has always been followed, and so they are not published, even after the statistics have been calculated from the data (the Statistics Act apparently prohibits publication until the stats have been published).

So I had an idea: wouldn’t it be great if we could pull the data that’s sitting in all these spreadsheets into a database and so allow comparison between councils’ accounts, thus freeing it from those forgotten corners of government computers.
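As a rough illustration of the sort of plumbing involved, here’s a minimal sketch. The file names, the idea that every worksheet has a usable header row, and the single flat output are all simplifying assumptions; the real RO forms would need per-sheet handling and mapping onto a common classification:

    # Sketch: read every worksheet of each Revenue Outturn spreadsheet
    # into one long table. Simplifying assumptions throughout.
    import glob
    import pandas as pd

    frames = []
    for path in glob.glob("ro_forms/*.xls"):
        # sheet_name=None returns a dict of {worksheet name: DataFrame}
        for sheet, df in pd.read_excel(path, sheet_name=None).items():
            df["source_file"] = path
            df["worksheet"] = sheet
            frames.append(df)

    combined = pd.concat(frames, ignore_index=True)
    combined.to_csv("revenue_outturn_combined.csv", index=False)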

This would seem to be a project that is just about simple enough to be doable (though it’s trickier than it seems) and could allow ordinary people to understand their council’s spending in all sorts of ways (particularly if we add some of those sexy Where Does My Money Go visualisations). It could also be useful in ways that we can barely imagine – some of the participatory budget experiments going on in Redbridge and other councils would be even more useful if the context of similar councils’ spending was added to the mix.

So how would this be funded? Well, the usual route would be for DCLG, or perhaps one of the Local Government Association bodies such as IDeA, to scope out a proposal, involving many hours of meetings, reams of paper, and running up thousands of pounds in costs, even before it’s started.

They’d then put the process out to tender, involving many more thousands in admin, a process designed to attract those companies who specialise in tendering for public sector work. Each of those would want to ensure they made a profit, and so would work out how they were going to do it before quoting, running up their own costs and inflating the final price.

So here’s part two of my plan: instead of going down that route, I’d come up with a proposal that would:

  • be a fraction of that cost
  • be specified on a single sheet of paper
  • be paid for only if I delivered

Obviously there’s a clear potential conflict of interest here – I sit on the government’s Local Public Data Panel and am pushing strongly for open data, and also stand to benefit (depending on how good I am at getting the information out of those hundreds of spreadsheets, each with multiple worksheets, and matching the classification systems). The solution to that – I think – is to do the whole thing transparently, hence this blog post.

In a sense, what I’m proposing is that I scope out the project, solving those difficult problems of how to do it, with the bonus that instead of delivering a report, I deliver the project.

Is it a good thing to have all this data imported into a database, and shown not just on a website in a way non-accountants can understand, but also available to be combined with other data in mashups and visualisations? Definitely.

Is it a good deal for the taxpayer, and is this open procurement a useful way of doing things? Well you can read the proposal for yourself here, and I’d be really interested in comments both on the proposal and the novel procurement model.

New feature: one-click FoI requests for spending payments

with 4 comments

Thanks to the incredible work of Francis Irving at WhatDoTheyKnow, we’ve now added a feature I’ve wanted on OpenlyLocal since we started importing the local spending data: one-click Freedom of Information requests on individual spending items, especially the large ones.

This further lowers the barriers for armchair auditors wanting to understand where the money goes, and the request even includes all the usual ‘boilerplate’ to help avoid specious refusals. I’ve started it off with one to Wandsworth, whose poor-quality spending data I discussed last week.

And this is the result, the whole process having taken less than a minute:

The requests are also being tagged. This means that in the near future you’ll be able to see on a transaction page if any requests have already been made about it, and the status of those requests (we’re just waiting for WDTK to implement search by tags), which will be the beginning of a highly interconnected transparency ecosystem.

In the meantime it’s worth checking the transaction hasn’t been requested before confirming your request on the WDTK page (there’s a link to recent requests for the council on the WDTK form you get to after pressing the button).

I’m also trusting the community will use this responsibly, digging out information on the big stuff, rather than firing off multiple requests to the same council for hundreds of individual items (which would in any case probably be deemed vexatious under the terms of the FoI Act). At the moment the feature’s only enabled on transactions over £10,000.

Good places to start would be those multi-million-pound monthly payments which indicate big outsourcing deals, or large redacted payments (Birmingham’s got a few). Have a look at the spending dashboard for your council and see if there are any such payments.
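If you’ve downloaded a council’s spending file and want to find candidates programmatically, something like this is all it takes – the column names are assumptions, since the CSV layout varies from council to council:

    # Sketch: pick out FoI-worthy payments from a council spending CSV.
    # Column names ('amount', 'supplier_name') are assumed; layouts vary.
    import pandas as pd

    spend = pd.read_csv("council_spending.csv")

    # Multi-million-pound payments that suggest big outsourcing deals
    print(spend.nlargest(10, "amount")[["supplier_name", "amount"]])

    # Large payments to redacted payees, totalled per payee
    redacted = spend[spend["supplier_name"].str.contains("REDACTED", case=False, na=False)]
    print(redacted.groupby("supplier_name")["amount"].sum().sort_values(ascending=False))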

A Local Spending Data wish… granted

with 25 comments

The very wonderful Stuart Harrison (aka pezholio), webmaster at Lichfield District Council, blogged yesterday with some thoughts about the publication of spending data following a local spending data workshop in Birmingham. Sadly I wasn’t able to attend this, but Stuart gives a very comprehensive account, and like all his posts it’s well worth reading.

In it he made an important observation about those at the workshop who were pushing for linked data from the beginning, and wished there was a solution. First the observation:

There did seem to be a bit of resistance to the linked data approach, mainly because agreeing standards seems to be a long, drawn out process, which is counter to the JFDI approach of publishing local data… I also recognise that there are difficulties in both publishing the data and also working with it… As we learned from the local elections project, often local authorities don’t even have people who are competent in HTML, let alone RDF, SPARQL etc.

He’s not wrong there. As someone who’s been publishing linked data for some time, who conceived and ran the Open Election Data project Stuart refers to, and who has worked with numerous councils to help them publish linked data, I’m probably as aware of the issues as anyone (ironically, and I think significantly, none of the councils involved in the local government e-standards body, and now pushing so hard for linked data, has actually published any linked data themselves).

That’s not to knock linked data – just to be realistic about the issues and hurdles that need to be overcome (see the report for a full breakdown). To expect all the councils to solve all these problems at the same time as extracting the data from their systems, removing data relating to non-suppliers (e.g. foster parents), and including information from other systems (e.g. supplier data, which may be on procurement systems), and all by January, is unrealistic at best, and could undermine the whole process.

So what’s to be done? I think the sensible thing, particularly in these straitened times, is to concentrate on getting the raw data out, and as much of it as possible, and come down hard on those councils who publish it badly (e.g. by locking it up in PDFs or giving it a closed licence), or who wilfully ignore the guidance (it’s worrying how many councils publishing data at the moment don’t even include the transaction ID or date of the transaction, never mind supplier details).

Beyond that we should take the approach the web has always done, and which is the reason for its success: a decentralised, messy variety of implementations and solutions that allows a rich eco-system to develop, with government helping solve bottlenecks and structural problems rather than trying to impose highly centralised solutions that are already being solved elsewhere.

Yes, I’d love it if the councils were able to publish the data fully marked up, in a variety of forms (not just linked data, but also XML and JSON), but the ugly truth is that not a single council has so far even published their list of categories, never mind matched it up to a recognised standard (CIPFA BVACOP, COFOG or that used in their submissions to the CLG), still less done anything like linked data. So there’s a long way to go, and in the meantime we’re going to need some tools and cheap commodity services to bridge the gap.

[In a perfect world, maybe councils would develop some open-source tools to help them publish the data, perhaps using something like Adrian Short’s Armchair Auditor code as the basis (this is a project that took a single council, Windsor & Maidenhead, and added a web interface to the figures). However, when many councils don’t even have competent HTML skills in-house (having outsourced much of it), this is only going to happen at a handful of councils at best, unless considerable investment is made.]

Stuart had been thinking along similar lines, and made a suggestion, almost a wish in fact:

I think the way forward is a centralised approach, with authorities publishing CSVs in a standard format on their website and some kind of system picking up these CSVs (say, on a monthly basis) and converting this data to a linked data format (as well as publishing in vanilla XML, JSON and CSV format).

He then expanded on the idea, talking about a single URL for each transaction, standard identifiers, “a human-readable summary of the data, together with links to the actual data in RDF, XML, CSV and JSON”. I’m a bit iffy about that ‘centralised approach’ phrase (the web is all about decentralisation), but I do think there’s an opportunity to help both the community and councils by solving some of these problems.

And that’s exactly what we’ve done at OpenlyLocal, adding the data from all the councils who’ve published their spending data, acting as a central repository, generating the URLs, and connecting the data to other datasets and identifiers (councils with SNAC IDs, companies with Companies House numbers). We’ve even extracted data from those councils who unhelpfully try to lock up their data in PDFs.

There are, at the time of writing, 52,443 financial transactions from 9 councils in the OpenlyLocal database. And that’s not all; there are also the following features:

  • Each transaction is tied to a supplier record for the council, and increasingly these are linked to company info (including the company number), or to other councils (there’s a lot of money being transferred between councils), and users can add information about the supplier if we haven’t matched it up.
  • Every transaction, supplier and company has a permanent unique URL and is available as XML and JSON (see the sketch after this list)
  • We’ve sorted out some of the date issues (adding a date fuzziness field for those councils who don’t specify when in the month or quarter a transaction relates to).
  • Transactions are linked to the URL from which the file was downloaded (and usually the line number too, though obviously this is not possible if we’ve had to extract it from a PDF), meaning anyone else can recreate the dataset should they want to.
  • There’s an increasing amount of analysis, showing ordinary users spending by month, biggest suppliers and transactions, for example.
  • The whole spending dataset is available as a single, zipped CSV file to download for anyone else to use.
  • It’s all open data.
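To give an idea of what that permanent URL plus JSON means in practice, here’s a sketch of pulling a single transaction – the URL path is illustrative, so check the site for the real pattern:

    # Sketch: fetch one spending transaction as JSON from OpenlyLocal.
    # The URL path here is illustrative, not guaranteed.
    import requests

    def get_transaction(transaction_id):
        url = f"http://openlylocal.com/financial_transactions/{transaction_id}.json"
        resp = requests.get(url)
        resp.raise_for_status()
        return resp.json()

    print(get_transaction(12345))  # hypothetical transaction ID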

There are a couple of features Stuart mentions that we haven’t yet implemented, for good reason.

First, we’re not yet publishing it as linked data, for the simple reason that the vocabulary hasn’t yet been defined, nor even the standards on which it will be based. When this is done, we’ll add this as a representation.

And although we use standard identifiers such as SNAC IDs for councils (and wards) on OpenlyLocal, the URL structure Stuart mentions is not yet practical, in part because SNAC IDs don’t cover all authorities (they don’t include the GLA, or other public bodies, for example), and only a tiny fraction of councils are publishing their internal transaction IDs.

Also, we haven’t yet implemented comments on the transactions, for the simple reason that distributed comment systems such as Disqus are javascript-based and thus are problematic for those with accessibility issues, while site-specific ones don’t allow the conversation to be carried on elsewhere (we think we might have a solution to this, but it’s at an early stage, and we’d be interested to hear other ideas).

But all in all, we reckon we’re pretty much there with Stuart’s wish list, and would hope that councils can get on with extracting the raw data, publishing it in an open, machine-readable format (such as CSV), and then move to linked data as their resources allow.

Written by countculture

August 3, 2010 at 7:45 am

Local Spending in OpenlyLocal: what features would you like to see?

with 2 comments

As I mentioned in a previous post, OpenlyLocal has now started importing council local spending data to make it comparable across councils and linkable to suppliers. We’ve now added some more councils, and some more features, with some interesting results.

As well as the original set of the Greater London Authority, Windsor & Maidenhead and Richmond upon Thames, we’ve added data from Uttlesford, King’s Lynn & West Norfolk and Surrey County Council (incidentally, given the size of Uttlesford and of King’s Lynn & West Norfolk, if they can publish this data, any council should be able to).

We’ve also added a basic Spending Dashboard, to give an overview of the data we’ve imported so far:

Of course the data provided is of variable quality and comes in various formats. Some, like King’s Lynn & West Norfolk’s, are simple, clean CSV files. Uttlesford have done theirs as a spreadsheet with each payment broken down to the relevant service, which is a bit messy to import but adds greater granularity than pretty much any other council.

Others, like Surrey, have taken the data that should be in a CSV file and for no apparent reason have put it in a PDF, which can be converted, but which is a bit of a pain to do, and means manual intervention in what should be a largely automatic process (challenge for journos/dirt-hunters: is there anything in the data that they’d want to hide, or is it just pig-headedness?).
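For what it’s worth, the conversion isn’t rocket science – with a table-extraction library like pdfplumber the whole thing is a few lines (the file name is made up, and real council PDFs usually need more per-page cleanup than this):

    # Sketch: recover tabular spending data from a PDF using pdfplumber.
    import csv
    import pdfplumber

    rows = []
    with pdfplumber.open("surrey_spending.pdf") as pdf:
        for page in pdf.pages:
            table = page.extract_table()
            if table:
                rows.extend(table)

    with open("surrey_spending.csv", "w", newline="") as f:
        csv.writer(f).writerows(rows)

It’s effort, though, that simply shouldn’t be necessary.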

But now we’ve got all that information in there we can start to analyse it, play with it, and ask questions about it, and we’ve started off by showing a basic dashboard for each council.

For each council, it’s got total spend, number of suppliers & transactions, and the biggest suppliers and transactions. It’s also got the spend per month (where a figure is given for a quarter or a two-month period, we’ve averaged it out over the relevant months). Here, for example, is the one for the Greater London Authority:

Lots of interesting questions here, from getting to understand all those leasing costs paid via the Amas Ltd Common Receipts Account, to what the £4m paid to Jack Morton Worldwide (which describes itself as a ‘global brand experience agency’) was for. Of course you can click on the supplier name for details of the transactions and any info that we’ve got on them (in this case it’s been matched to a company – but you can now submit info about a company if we haven’t matched it up).

You can then click on the transaction to find out more info on it, if that info was published – and either way it’s perhaps the start of an FoI request:

It’s also worth looking at the Spend By Month, as a raw sanity-check. Here’s the dashboard for Windsor & Maidenhead:

See that big gap for July & August 09? My first thought was that there was an error in importing the data, which is perfectly possible, especially when the formatting changes as frequently as it does in W&M’s data files, but looking at the actual file, there appear to be no entries for July & August 09 (I’ve notified them and hopefully we’ll get corrected data published soon). This, for me, is one of the advantages of visualisations: being able to easily spot anomalies in the data that looking at tables or databases wouldn’t show.
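For anyone who wants to run the same sanity check on their own council’s file, the gist is just a group-by-month – again the column names are assumptions about the CSV layout:

    # Sketch: total spend per month, flagging months with no payments.
    # Column names ('date', 'amount') are assumed; layouts vary.
    import pandas as pd

    spend = pd.read_csv("council_spending.csv", parse_dates=["date"])
    monthly = spend.set_index("date").resample("MS")["amount"].sum()
    print(monthly)
    print("Empty months:", list(monthly[monthly == 0].index.strftime("%b %Y")))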

So what further analyses would you like out of the box: average transaction size, number of transactions over £1m, percentage of transactions for a round number (i.e. with a zero at the end), more visualisations? We’d love your suggestions – please leave them in the comments or tweet me.

Written by countculture

July 26, 2010 at 9:44 am

Some progress on the Local Spending/Spikes Cavell issue

with 5 comments

Yesterday I was invited to a meeting at the Department for Communities and Local Government with the key players in the local spending/Spikes Cavell issue that I’ve written about previously (see The open data that isn’t and Update on the local spending data scandal… the empire strikes back).

The meeting included Luke Spikes from Spikes Cavell and Andrew Larner from IESE (the Regional Improvement and Efficiency Partnership for the South East), which helped set up the deal, as well as myself and Nigel Shadbolt, who chairs the Local Public Data Panel and sits on the government’s Transparency Board. I won’t go into all the details, but the meeting was cordial and constructive, produced a lot of information about how the deal works, and also potentially made progress in terms of solving some of the key issues.

We can now, for example, start to understand the deal – it’s called the Transform project – which as I understand it is a package deal to take raw information from the councils’ accounts and other systems (e.g. purchase & procurement systems) to SC’s specification, clean up and depersonalise the data, then analyse it to show the councils potential savings/improvements, and finally to publish a cut of this information on the Spotlight on Spend website. Essentially we have this:

There are still some details missing from this picture – we haven’t yet seen the Memorandum of Understanding which frames the deal, nor the specification of the raw information that is provided to Spikes Cavell, but we have been promised both of these imminently. The latter in particular will be very useful, as it will allow us to refine the advice we are giving councils about the data they should be publishing in order to make the spending information useful and comparable (it hadn’t been suggested previously, for example, that it would be useful to include details from the council’s procurement systems, though in hindsight this makes a lot of sense).

Crucially, it was also agreed that all the input data into Spikes Cavell’s proprietary systems (the ‘Cleaned-up but non-proprietary data’ in the diagram above) would be published, so the wider community would be on the same footing as Spikes Cavell as far as access to the raw data goes. This is crucial and worth repeating: it means that anyone else will have access to the same base data as Spikes Cavell, and the playing field is therefore pretty much level.

There are still issues to be sorted out, the chief of which is that while Spikes Cavell is happy to publish the raw data under a completely open licence, they will require the OK of the council to do so. (However, armed with this knowledge it will be easy to identify those councils that refuse, and then possible to tackle them either through persuasion or ultimately legislation.)

The other issues are, briefly: liability for depersonalising the data; where the data is published (I think it should be on the council’s own website or a data.gov.uk, or for London councils the London Datastore, not on the Spotlight On Spend website); whether the Spotlight On Spend website itself is necessary and cost-effective (it’s impossible to know how much it costs as it’s bundled in with the whole deal); and whether the data-cleansing should be stripped out from the rest of the deal.

However, it’s worth saying that this agreement goes beyond just the member councils of the IESE, extending to all councils that use a similar agreement in the future (obviously it’s ultimately up to them, but this was certainly the wish of everyone at the meeting).

Finally, I’d like to thank Andrew Larner at IESE for his open approach, and Spikes Cavell for their willingness to engage. What we have here isn’t perfect (and I still fundamentally believe that councils should be doing the cleansing and publishing of the data themselves, exchanging that knowledge with other councils and using it to improve their own data processes), but it’s a big step forward in genuinely opening up raw council data.

Update: The official notes of the meeting have now been published on the Local Public Data panel blog: http://data.gov.uk/blog/local-public-data-panel-%E2%80%93-sub-group-meeting-spotlight-spend-20-july-2010

Written by countculture

July 20, 2010 at 10:57 pm
