Shallow Thoughts : tags : government

Akkana's Musings on Open Source Computing and Technology, Science, and Nature.

Wed, 01 Feb 2023

The 2023 Legislative Session, Student Advocates, and Hope for the Future

This year's New Mexico legislative session started Jan 17 and runs through Mar 18. As usual, they have a full schedule.

Also as usual, I've been scrambling with updates to the New Mexico Bill Tracker. This year's new feature is tags; I seeded it with a few tags I use, like health and elections, plus an LWVNM tag for bills the League of Women Voters is tracking and advocating for or against. But the list has grown quite a bit from there, and it's been fun to watch what tags other people are interested in.

One bill of particular interest this session is HB134: MENSTRUAL PRODUCTS IN SCHOOL BATHROOMS. It's driven by three Albuquerque Academy high school students, seniors Noor Ali, Sophia Liem and Mireya Macías.

Read more ...

[ 17:59 Feb 01, 2023    More politics | permalink to this entry | ]

Tue, 23 Mar 2021

Writing a Bill

I've been super busy this month. The New Mexico Legislature was in session, and in addition to other projects, I've had a chance to be involved in the process of writing a new bill and helping it move through the legislature. It's been interesting, educational, and sometimes frustrating.

The bill is SB304: Voting District Geographic Data. It's an "open data" bill: it mandates that election district boundary data for all voting districts, down to the county and municipal level, be publicly available at no charge on the Secretary of State's website.

Read more ...

[ 13:28 Mar 23, 2021    More politics | permalink to this entry | ]

Thu, 21 Jan 2021

Track Bills in the 2021 New Mexico Legislative Session

This year's New Mexico Legislative Session started Tuesday. For the last few weeks I've been madly scrambling to make sure the bugs are out of some of the New Mexico Bill Tracker's new features: notably, it now lets you switch between the current session and past sessions, and I cleaned up the caching code that tries to guard against hitting the legislative website too often.
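
The caching idea itself is simple; here's a minimal sketch of that kind of guard (not the billtracker's actual code -- the cache directory, the two-hour interval and the use of the requests library are just assumptions for illustration):

import os, time
import requests

CACHE_DIR = "cache"        # hypothetical cache location
MAX_AGE = 2 * 60 * 60      # seconds: don't re-fetch a page more often than this

def fetch_cached(url, cachefile):
    path = os.path.join(CACHE_DIR, cachefile)
    # If we fetched this page recently, use the saved copy:
    if os.path.exists(path) and time.time() - os.path.getmtime(path) < MAX_AGE:
        with open(path) as fp:
            return fp.read()
    # Otherwise fetch the page again and save a copy for next time:
    if not os.path.exists(CACHE_DIR):
        os.mkdir(CACHE_DIR)
    html = requests.get(url).text
    with open(path, "w") as fp:
        fp.write(html)
    return html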

Read more ...

[ 17:50 Jan 21, 2021    More politics | permalink to this entry | ]

Fri, 25 Jan 2019

Announcing the New Mexico Bill Tracker

For the last few weeks I've been consumed with a project I started last year and then put aside for a while: a bill tracker.

The project sprung out of frustration at the difficulty of following bills as they pass through the New Mexico legislature. Bills I was interested in would die in committee, or they would make it to a vote, and I'd read about it a few days later and wish I'd known that it was a good time to write my representative or show up at the Roundhouse to speak. (I've never spoken at the Roundhouse, and whether I'd have the courage to actually do it remains to be seen, but at least I'd like to have the chance to decide.)

New Mexico has a Legislative web site where you can see the status of each bill, and they even offer a way to register and save a list of bills; but then there's no way to get alerts about bills that change status and might be coming up for debate.

New Mexico legislative sessions are incredibly short: 60 days in odd years, 30 days in even. During last year's 30-day session, I wrote some Python code that scraped the HTML pages describing a bill, extracted the useful information like when the bill last changed status and where it was right now, presented the information in a table the user could easily scan, and emailed the user a daily summary. Fortunately, the nmlegis.gov site, while it doesn't offer raw data for bill status, at least uses lots of id tags in its HTML, which makes the pages relatively easy to scrape.
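
To give a flavor of what that kind of scraping looks like, here's a rough sketch using BeautifulSoup (the URL pattern and the element ids here are made up for the example; the real pages use their own ids, which you'd find by inspecting the HTML):

import requests
from bs4 import BeautifulSoup

def bill_status(billno):
    # Hypothetical URL pattern and id names, for illustration only:
    url = "https://www.nmlegis.gov/Legislation/Legislation?billnum=" + billno
    soup = BeautifulSoup(requests.get(url).text, "html.parser")

    title = soup.find(id="lblBillTitle")
    location = soup.find(id="lblBillLocation")
    return {
        "billno": billno,
        "title": title.get_text(strip=True) if title else None,
        "location": location.get_text(strip=True) if location else None,
    }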

Then the session ended and there was no further way to test it, since bills' statuses were no longer changing. So the billtracker moved to the back burner.

In the runup to this year's 60-day session, I started with Flask, a lightweight Python web library I've used for a couple of small projects, and added some extensions that help Flask handle tasks like user accounts. Then I patched in the legislative web scraping code from last year, and the result was The New Mexico Bill Tracker. I passed the word to some friends in the League of Women Voters and the Sierra Club to help me test it, and I think (hope) it's ready for wider testing.
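
The skeleton of such an app is small; here's a minimal sketch assuming Flask-SQLAlchemy and Flask-Login for the accounts (the real billtracker may be organized quite differently):

from flask import Flask
from flask_sqlalchemy import SQLAlchemy
from flask_login import LoginManager, UserMixin

app = Flask(__name__)
app.config["SQLALCHEMY_DATABASE_URI"] = "sqlite:///billtracker.db"
db = SQLAlchemy(app)
login_manager = LoginManager(app)

class User(UserMixin, db.Model):
    id = db.Column(db.Integer, primary_key=True)
    email = db.Column(db.String(120), unique=True, nullable=False)
    # ... plus whatever tracks which bills this user follows

@login_manager.user_loader
def load_user(user_id):
    # Flask-Login calls this to turn a stored session id back into a User.
    return User.query.get(int(user_id))

@app.route("/")
def index():
    return "Bill list goes here"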

There's lots more I'd like to do, of course. I still have no way of knowing when a bill will be up for debate. It looks like this year the Legislative web site is showing committee schedules in a fairly standard way, as opposed to the unparseable PDFs they used in past years, so I may be able to get that. Not that legislative committees actually stick to their published schedules; but at least it's a start.

New Mexico readers (or anyone else interested in following the progress of New Mexico bills) are invited to try it. Let me know about any problems you encounter. And if you want to adapt the billtracker for use in another state, send me a note! I'd love to see it extended and would be happy to work with you. Here's the source: BillTracker on GitHub.

[ 12:34 Jan 25, 2019    More politics | permalink to this entry | ]

Thu, 12 Oct 2017

Letter to the New Mexico Public Education Department on Science Standards

For those who haven't already read about the issue in the national press, New Mexico's Public Education Department (a body appointed by the governor) has a proposal regarding new science standards for all state schools. The proposal starts with the national Next Generation Science Standards but then makes modifications, omitting points like references to evolution and embryological development or the age of the Earth and adding a slew of NM-specific standards that are mostly sociological rather than scientific.

You can read more background in the Mother Jones article, New Mexico Doesn’t Want Your Kids to Know How Old the Earth Is. Or why it’s getting warmer, including links to the proposed standards. Ars Technica also covered it: Proposed New Mexico science standards edit out basic facts.

New Mexico residents have until 5 p.m. next Monday, October 16, to speak out about the proposal. Email comments to rule.feedback@state.nm.us or send snail mail (it must arrive by Monday) to Jamie Gonzales, Policy Division, New Mexico Public Education Department, Room 101, 300 Don Gaspar Avenue, Santa Fe, New Mexico 87501.

A few excellent letters people have already written:

I'm sure they said it better than I can. But every voice counts -- they'll be counting letters! So here's my letter. If you live in New Mexico, please send your own. It doesn't have to be long: the important thing is that you begin by stating your position on the proposed standards.


Members of the PED:

Please reconsider the proposed New Mexico STEM-Ready Science Standards, and instead, adopt the nationwide Next Generation Science Standards (NGSS) for New Mexico.

With New Mexico schools ranking at the bottom in every national education comparison, and with New Mexico hurting for jobs and having trouble attracting technology companies to our state, we need our students learning rigorous, established science.

The NGSS represents the work of people in 26 states, and is being used without change in 18 states already. It's been well vetted, and there are many lesson plans, textbooks, tests and other educational materials available for it.

The New Mexico Legislature supports NGSS: they passed House Bill 211 in 2017 (vetoed by Governor Martinez) requiring adoption of the NGSS. The PED's own Math and Science Advisory Council (MSAC) supports NGSS: they recommended in 2015 that it be adopted. Why has the PED ignored the legislature and its own advisory council?

Using the NGSS without New Mexico changes will save New Mexico money. The NGSS is freely available. Open source textbooks and lesson plans are already available for the NGSS, and more are coming. In contrast, the New Mexico STEM-Ready standards would be unique to New Mexico: not only would we be left out of free nationwide educational materials, but we'd have to pay to develop New Mexico-specific curricula and textbooks that couldn't be used anywhere else, and the resulting textbooks would cost far more than standard texts. Most of this money would go to publishers in other states.

New Mexico consistently ranks at the bottom in educational comparisons. Yet nearly 15% of the PED's proposed STEM-Ready standards are New Mexico-specific standards, taught nowhere else, which will take time away from teaching core science concepts. Where is the evidence that our state standards would be better than what is taught in other states? Who are we to think we can write better standards than a nationwide coalition?

In addition, some of the changes in the proposed NM STEM-Ready Science Standards seem to be motivated by political ideology, not science. Science standards used in our schools should be based on widely accepted scientific principles. Not to mention that the national coverage on this issue is making our state a laughingstock.

Finally, the lack of transparency in the NMSRSS proposal is alarming. Who came up with the proposed NMSRSS standards? Are there any experts in science education that support them? Is there any data to indicate they'd be more effective than the NGSS? Why wasn't the development of the NMSRSS discussed in open PED meetings as required by the Open Meetings Act?

The NGSS are an established, well regarded national standard. Don't shortchange New Mexico students by teaching them watered-down science. Please discard the New Mexico STEM-Ready proposal and adopt the Next Generation Science Standards, without New Mexico-specific changes.

[ 10:16 Oct 12, 2017    More politics | permalink to this entry | ]

Thu, 19 Jan 2017

Plotting Shapes with Python Basemap without Shapefiles

In my article on Plotting election (and other county-level) data with Python Basemap, I used ESRI shapefiles for both states and counties.

But one of the election data files I found, OpenDataSoft's USA 2016 Presidential Election by county had embedded county shapes, available either as CSV or as GeoJSON. (I used the CSV version, but inside the CSV the geo data are encoded as JSON so you'll need JSON decoding either way. But that's no problem.)

Just about all the documentation I found on coloring shapes in Basemap assumed that the shapes were defined as ESRI shapefiles. How do you draw shapes if you have latitude/longitude data in a more open format?

As it turns out, it's quite easy, but it took a fair amount of poking around inside Basemap to figure out how it worked.

In the loop over counties in the US in the previous article, the end goal was to create a matplotlib Polygon and use that to add a Basemap patch. But matplotlib's Polygon wants map coordinates, not latitude/longitude.

If m is your basemap (i.e., you created the map with m = Basemap( ... )), you can translate coordinates like this:

    (mapx, mapy) = m(longitude, latitude)

So once you have a region as a list of (longitude, latitude) coordinate pairs, you can create a colored, shaped patch like this:

    # Convert each coordinate pair from (longitude, latitude) to map
    # coordinates, in place:
    for coord_pair in region:
        coord_pair[0], coord_pair[1] = m(coord_pair[0], coord_pair[1])
    # Polygon here is matplotlib.patches.Polygon:
    poly = Polygon(region, facecolor=color, edgecolor=color)
    ax.add_patch(poly)

Working with the OpenDataSoft data file was actually a little harder than that, because the list of coordinates was JSON-encoded inside the CSV file, so I had to decode it with json.loads(county["Geo Shape"]). Once decoded, it had some counties as a Polygon, a list of lists (allowing for discontiguous outlines), and others as a MultiPolygon, a list of lists of lists (I'm not sure why, since the Polygon format already allows for discontiguous boundaries).
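
One way to normalize the two cases into a single flat list of outlines looks something like this (a sketch, not necessarily how the linked script does it; "Geo Shape" is the column name in the OpenDataSoft CSV):

import json

def county_regions(county):
    # county is one row (a dict) read from the OpenDataSoft CSV.
    geo = json.loads(county["Geo Shape"])
    if geo["type"] == "Polygon":
        # A Polygon is a list of rings, each ring a list of [lon, lat] pairs.
        return geo["coordinates"]
    if geo["type"] == "MultiPolygon":
        # A MultiPolygon is a list of polygons; flatten to one list of rings.
        return [ring for polygon in geo["coordinates"] for ring in polygon]
    return []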

[Blue-red-purple 2016 election map]

And a few counties were missing, so there were blanks on the map, which show up as white patches in this screenshot. The counties missing data either have inconsistent formatting in their coordinate lists or have only one coordinate pair. They include Washington, Virginia; Roane, Tennessee; Schley, Georgia; Terrell, Georgia; Marshall, Alabama; Williamsburg, Virginia; and Pike, Georgia; plus Oglala Lakota (which is clearly meant to be Oglala, South Dakota), and all of Alaska.

One thing about crunching data files from the internet is that there are always a few special cases you have to code around. And I could have gotten those coordinates from the census shapefiles; but as long as I needed the census shapefile anyway, why use the CSV shapes at all? In this particular case, it makes more sense to use the shapefiles from the Census.

Still, I'm glad to have learned how to use arbitrary coordinates as shapes, freeing me from the proprietary and annoying ESRI shapefile format.

The code: Blue-red map using CSV with embedded county shapes

[ 09:36 Jan 19, 2017    More programming | permalink to this entry | ]

Sat, 14 Jan 2017

Plotting election (and other county-level) data with Python Basemap

After my arduous search for open 2016 election data by county, as a first test I wanted one of those red-blue-purple charts of how Democratic or Republican each county's vote was.

I used the Basemap package for plotting. It used to be part of matplotlib, but it's been split off into its own toolkit, grouped under mpl_toolkits: on Debian, it's available as python-mpltoolkits.basemap, or you can find Basemap on GitHub.

It's easiest to start with the fillstates.py example that shows how to draw a US map with different states colored differently. You'll need the three shapefiles (because of ESRI's silly shapefile format): st99_d00.dbf, st99_d00.shp and st99_d00.shx, available in the same examples directory.

Of course, to plot counties, you need county shapefiles as well. The US Census has county shapefiles at several different resolutions (I used the 500k version). Then you can plot state and county outlines like this:

from mpl_toolkits.basemap import Basemap
import matplotlib.pyplot as plt

def draw_us_map():
    # Set the lower left and upper right limits of the bounding box:
    lllon = -119
    urlon = -64
    lllat = 22.0
    urlat = 50.5
    # and calculate a centerpoint, needed for the projection:
    centerlon = float(lllon + urlon) / 2.0
    centerlat = float(lllat + urlat) / 2.0

    m = Basemap(resolution='i',  # crude, low, intermediate, high, full
                llcrnrlon = lllon, urcrnrlon = urlon,
                lon_0 = centerlon,
                llcrnrlat = lllat, urcrnrlat = urlat,
                lat_0 = centerlat,
                projection='tmerc')

    # Read state boundaries.
    shp_info = m.readshapefile('st99_d00', 'states',
                               drawbounds=True, color='lightgrey')

    # Read county boundaries
    shp_info = m.readshapefile('cb_2015_us_county_500k',
                               'counties',
                               drawbounds=True)

if __name__ == "__main__":
    draw_us_map()
    plt.title('US Counties')
    # Get rid of some of the extraneous whitespace matplotlib loves to use.
    plt.tight_layout(pad=0, w_pad=0, h_pad=0)
    plt.show()
[Simple map of US county borders]

Accessing the state and county data after reading shapefiles

Great. Now that we've plotted all the states and counties, how do we get a list of them, so that when I read out "Santa Clara, CA" from the data I'm trying to plot, I know which map object to color?

After calling readshapefile('st99_d00', 'states'), m has two new members, both lists: m.states and m.states_info.

m.states_info[] is a list of dicts mirroring what was in the shapefile. For the Census state list, the useful keys are NAME, AREA, and PERIMETER. There's also STATE, which is an integer (not restricted to 1 through 50) but I'll get to that.

If you want the shape for, say, California, iterate through m.states_info[] looking for the one where m.states_info[i]["NAME"] == "California". Note the index i: the shape coordinates will be in m.states[i] (in Basemap map coordinates, not latitude/longitude).
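
For instance, something like this (a sketch, assuming the shapefile has already been read with readshapefile as above):

def state_shapes(m, name):
    '''Return all the map-coordinate outlines for one state.
       A state can have several shapes, e.g. because of islands.
    '''
    return [m.states[i] for i, info in enumerate(m.states_info)
            if info["NAME"] == name]

california_shapes = state_shapes(m, "California")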

Correlating states and counties in Census shapefiles

County data is similar, with county names in m.counties_info[i]["NAME"]. Remember that STATE integer? Each county has a STATEFP, m.counties_info[i]["STATEFP"] that matches some state's m.states_info[i]["STATE"].

But doing that search every time would be slow. So right after calling readshapefile for the states, I make a table of states. Empirically, STATE in the state list goes up to 72. Why 72? Shrug.

    MAXSTATEFP = 73
    states = [None] * MAXSTATEFP
    for state in m.states_info:
        statefp = int(state["STATE"])
        # Many states have multiple entries in m.states (because of islands).
        # Only add it once.
        if not states[statefp]:
            states[statefp] = state["NAME"]

That'll make it easy to look up a county's state name quickly when we're looping through all the counties.

Calculating colors for each county

Time to figure out the colors from the Deleetdk election results CSV file. Reading lines from the CSV file into a dictionary is superficially easy enough:

    fp = open("tidy_data.csv")
    reader = csv.DictReader(fp)

    # Make a dictionary of all "county, state" and their colors.
    county_colors = {}
    for county in reader:
        # What color is this county?
        pop = float(county["votes"])
        blue = float(county["results.clintonh"])/pop
        # Republican fraction of the vote; the column name "results.trumpd" is
        # an assumption (by analogy with "results.clintonh") -- check the CSV's header row.
        red = float(county["results.trumpd"])/pop
        county_colors["%s, %s" % (county["name"], county["State"])] \
            = (red, 0, blue)

But in practice, that wasn't good enough, because the county names in the Deleetdk data didn't always match the official Census county names.

Fuzzy matches

For instance, the CSV file had no results for Alaska or Puerto Rico, so I had to skip those. Non-ASCII characters were a problem: "Doña Ana" county in the census data was "Dona Ana" in the CSV. I had to strip off " County", " Borough" and similar terms: "St Louis" in the census data was "St. Louis County" in the CSV. Some names were capitalized differently, like PLYMOUTH vs. Plymouth, or Lac Qui Parle vs. Lac qui Parle. And some names were just different, like "Jeff Davis" vs. "Jefferson Davis".

To get around that I used SequenceMatcher to look for fuzzy matches when I couldn't find an exact match:

from difflib import SequenceMatcher

def fuzzy_find(s, slist):
    '''Try to find a fuzzy match for s in slist.
    '''
    best_ratio = -1
    best_match = None

    ls = s.lower()
    for ss in slist:
        r = SequenceMatcher(None, ls, ss.lower()).ratio()
        if r > best_ratio:
            best_ratio = r
            best_match = ss
    if best_ratio > .75:
        return best_match
    return None

Correlate the county names from the two datasets

It's finally time to loop through the counties in the map to color and plot them.

Remember STATE vs. STATEFP? It turns out there are a few counties in the census county shapefile with a STATEFP that doesn't match any STATE in the state shapefile. Mostly they're in the Virgin Islands and I don't have election data for them anyway, so I skipped them for now. I also skipped Puerto Rico and Alaska (no results in the election data) and counties that had no corresponding state: I'll omit that code here, but you can see it in the final script, linked at the end.

    for i, county in enumerate(m.counties_info):
        countyname = county["NAME"]
        try:
            statename = states[int(county["STATEFP"])]
        except IndexError:
            print countyname, "has out-of-index statefp of", county["STATEFP"]
            continue

        countystate = "%s, %s" % (countyname, statename)
        try:
            ccolor = county_colors[countystate]
        except KeyError:
            # No exact match; try for a fuzzy match
            fuzzyname = fuzzy_find(countystate, county_colors.keys())
            if fuzzyname:
                ccolor = county_colors[fuzzyname]
                county_colors[countystate] = ccolor
            else:
                print "No match for", countystate
                continue

        countyseg = m.counties[i]
        poly = Polygon(countyseg, facecolor=ccolor)  # edgecolor="white"
        ax.add_patch(poly)

Moving Hawaii

Finally, although the CSV didn't have results for Alaska, it did have Hawaii. To display it, you can move it when creating the patches:

    countyseg = m.counties[i]
    if statename == 'Hawaii':
        countyseg = list(map(lambda (x,y): (x + 5750000, y-1400000), countyseg))
    poly = Polygon(countyseg, facecolor=ccolor)
    ax.add_patch(poly)
The offsets are in map coordinates and are empirical; I fiddled with them until Hawaii showed up at a reasonable place.

[Blue-red-purple 2016 election map]

Well, that was a much longer article than I intended. Turns out it takes a fair amount of code to correlate several datasets and turn them into a map. But a lot of the work will be applicable to other datasets.

Full script on GitHub: Blue-red map using Census county shapefile

[ 15:10 Jan 14, 2017    More programming | permalink to this entry | ]

Thu, 12 Jan 2017

Getting Election Data, and Why Open Data is Important

Back in 2012, I got interested in fiddling around with election data as a way to learn about data analysis in Python. So I went searching for results data on the presidential election. And got a surprise: it wasn't available anywhere in the US. After many hours of searching, the only source I ever found was at the UK newspaper, The Guardian.

Surely in 2016, we're better off, right? But when I went looking, I found otherwise. There's still no official source for US election results data; there isn't even a source as reliable as The Guardian this time.

You might think Data.gov would be the place to go for official election results, but no: searching for 2016 election on Data.gov yields nothing remotely useful.

The Federal Election Commission has an election results page, but it only goes up to 2014 and only includes the Senate and House, not presidential elections. Archives.gov has popular vote totals for the 2012 election but not the current one. Maybe in four years, they'll have some data.

After striking out on official US government sites, I searched the web. I found a few sources, none of them even remotely official.

Early on I found Simon Rogers, How to Download County-Level Results Data, which leads to GitHub user tonmcg's County Level Election Results 12-16. It's a comparison of Democratic vs. Republican votes in the 2012 and 2016 elections (I assume that means votes for that party's presidential candidate, though the field names don't make that entirely clear), with no information on third-party candidates.

KidPixo's Presidential Election USA 2016 on GitHub is a little better: the fields make it clear that it's recording votes for Trump and Clinton, but still no third-party information. It's also scraped from the New York Times, but it includes the scraping code, so you can check it and have some confidence in the source of the data.

Kaggle claims to have election data, but you can't download their datasets or even see what they have without signing up for an account. Ben Hamner has some publicly available Kaggle data on GitHub, but only for the primary. I also found several companies selling election data, and several universities that had datasets available for researchers with accounts at that university.

The most complete dataset I found, and the only open one that included third-party candidates, was through OpenDataSoft. Like the other two, this data is scraped from the NYT. It has data for all the minor-party candidates as well as the majors, plus lots of demographic data for each county in the lower 48 and Hawaii; but it doesn't cover the territories, and the election data for all the Alaska counties is missing.

You can get it from a GitHub repo, Deleetdk's USA.county.data (look in inst/ext/tidy_data.csv). If you want a larger version with geographic shape data included, clicking through several other OpenDataSoft pages eventually gets you to an export page, USA 2016 Presidential Election by county, where you can download CSV, JSON, GeoJSON and other formats.

The OpenDataSoft data file was pretty useful, though it had gaps (for instance, there's no data for Alaska). I was able to make my own red-blue-purple plot of county voting results (I'll write separately about how to do that with python-basemap), and to play around with statistics.

Implications of the lack of open data

But the point my search really brought home: By the time I finally found a workable dataset, I was so sick of the search, and so relieved to find anything at all, that I'd stopped being picky about where the data came from. I had long since given up on finding anything from a known source, like a government site or even a newspaper, and was just looking for data, any data.

And that's not good. It means that a lot of the people doing statistics on elections are using data from unverified sources, probably copied from someone else who claimed to have scraped it, using unknown code, from some post-election web page that likely no longer exists. Is it accurate? There's no way of knowing.

What if someone wanted to spread misinformation? There's a hunger for data, particularly on something as important as a US Presidential election. Looking at Google's suggested results and "Searches related to" made it clear that it wasn't just me: there are a lot of people searching for this information and not being able to find it through official sources.

If I were a foreign power wanting to spread disinformation, providing easily available data files -- to fill the gap left by the US Government's refusal to do so -- would be a great way to mislead people. I could put anything I wanted in those files: there's no way of checking them against official results since there are no official results. Just make sure the totals add up to what people expect to see. You could easily set up an official-looking site and put made-up data there, and it would look a lot more real than all the people scraping from the NYT.

If our government -- or newspapers, or anyone else -- really wanted to combat "fake news", they should take open data seriously. They should make datasets for important issues like the presidential election publically available, as soon as possible after the election -- not four years later when nobody but historians care any more. Without that, we're leaving ourselves open to fake news and fake data.

[ 16:41 Jan 12, 2017    More politics | permalink to this entry | ]

Tue, 11 Oct 2016

New Mexico LWV Voter Guides are here!

[Vote button] I'm happy to say that our state League of Women Voters Voter Guides are out for the 2016 election.

My grandmother was active in the League of Women Voters most of her life (at least after I was old enough to be aware of such things). I didn't appreciate it at the time -- and I also didn't appreciate that she had been born in a time when women couldn't legally vote, and the 19th amendment, giving women the vote, was ratified just a year before she reached voting age. No wonder she considered the League so important!

The LWV continues to work to extend voting to people of all genders, races, and economic groups -- especially important in these days when the Voting Rights Act is under attack and so many groups are being disenfranchised. But the League is important for another reason: local LWV chapters across the country produce detailed, non-partisan voter guides for each major election, which are distributed free of charge to voters. In many areas -- including here in New Mexico -- there's no equivalent of the "Legislative Analyst" who writes the lengthy analyses that appear on California ballots weighing the pros, cons and financial impact of each measure. In the election two years ago, not that long after Dave and I moved here, finding information on the candidates and ballot measures wasn't easy, and the LWV Voter Guide was by far the best source I saw. It's the main reason I joined the League, though I also appreciate the public candidate forums and other programs they put on.

LWV chapters are scrupulous about collecting information from candidates in a fair, non-partisan way. Candidates' statements are presented exactly as they're received, and all candidates are given the same specifications and deadlines. A few candidates ignored us this year and didn't send statements despite repeated emails and phone calls, but we did what we could.

New Mexico's state-wide voter guide -- the one I was primarily involved in preparing -- is at New Mexico Voter Guide 2016. It has links to guides from three of the four local LWV chapters: Los Alamos, Santa Fe, and Central New Mexico (Albuquerque and surrounding areas). The fourth chapter, Las Cruces, is still working on their guide and they expect it soon.

I was surprised to see that our candidate information doesn't include links to websites or social media. Apparently that's not part of the question sheet they send out, and I got blank looks when I suggested we should make sure to include that next time. The LWV does a lot of important work but they're a little backward in some respects. That's definitely on my checklist for next time, but for now, if you want a candidate's website, there's always Google.

I also helped a little on Los Alamos's voter guide, making suggestions on how to present it on the website (I maintain the state League website but not the Los Alamos site), and participated in the committee that wrote the analysis and pro and con arguments for our contentious charter amendment proposal to eliminate the elective office of sheriff. We learned a lot about the history of the sheriff's office here in Los Alamos, and about state laws and insurance rules regarding sheriffs, and I hope the important parts of what we learned are reflected in both sides of the argument.

The Voter Guides also have a link to a YouTube recording of the first Los Alamos LWV candidate forum, featuring NM House candidates, DA, Probate Judge and, most important, the debate over the sheriff proposition. The second candidate forum, featuring US House of Representatives, County Council and County Clerk candidates, will be this Thursday, October 13, at 7 p.m. (refreshments at 6:30). It will also be recorded, thanks to a contribution from the AAUW.

So -- busy, busy with election-related projects. But I think the work is mostly done (except for the one remaining forum), the guides are out, and now it's time to go through and read the guides. And then the most important part of all: vote!

[ 16:08 Oct 11, 2016    More politics | permalink to this entry | ]

Mon, 06 Aug 2007

Votes on the Warrantless Wiretapping Act

All the news media carried stories on how our (US) legislators voted on a bill Friday night that greatly eased the rules on wiretapping. The House followed through and passed the bill on Saturday.

The new updates to FISA, the Foreign Intelligence Surveillance Act, will allow the NSA or the attorney general to authorize monitoring of telephones or email, without a warrant, if the communications involve people "reasonably believed to be outside the United States".

The story reported in most of the papers is that Democrats were against the bill and wanted a version which required warrants in more cases. But the President threatened to hold Congress in session into its scheduled summer recess if it did not approve the changes he wanted -- and that was enough, apparently, for the Senate to vote for warrantless surveillance of Americans. (I confess I don't quite understand why the president can hold Congress in session indefinitely until he gets the vote he wants. Can't they just vote No?)

What I couldn't find in any of the stories was a breakdown of the votes. What about our presidential candidates? Did they support warrantless wiretapping -- or, perhaps worse, just not care about the ramifications of a bill if further consideration of it might cut into their vacation time?

Finding out

Finding Senate votes is very easy. Googling for senate votes takes you right to the Senate.gov breakdown of recent votes by Senator name or by state. Here are the results for S.1927.

The House is harder. They don't seem to have a nice "recent votes" page like the Senate does, or any obvious way to find bills (I had little luck with their site search), though a pressec.com story gave a link to the bill on Thomas.loc.gov, which links to an official House.gov vote count.

In the absence of pressec.com's help, the easiest way to find House voting records is to use the Washington Post Votes Database.

How did they vote?

I was happy to see that all the major Democratic candidates in Congress voted against the smarmily named "Protect America Act", including Hillary Clinton, Barack Obama, Joe Biden, and Christopher Dodd, and (in the House) Dennis Kucinich. John Kerry (who is not an official candidate) didn't vote.

On the Republican side, candidate Sam Brownback voted for the bill, while candidates John McCain, Tom Tancredo and Ron Paul didn't vote.

Of course, I was also interested in my local legislators. California Senator Dianne Feinstein voted for passage (why do people keep voting her back in?) while our other senator, Barbara Boxer didn't vote. In the House, my representative, the always sensible Zoe Lofgren, voted against the bill. In fact, she spoke out against it, saying "This bill would grant the attorney general the ability to wiretap anybody, any place, any time without court review, without any checks and balances. I think this unwarranted, unprecedented measure would simply eviscerate the 4th Amendment." Hurray, Zoe! House Speaker Nancy Pelosi also voted against.

How did your legislators vote?

[ 14:20 Aug 06, 2007    More politics | permalink to this entry | ]

Thu, 06 Jul 2006

Was the 2004 Election Stolen?

Anyone following the voting machine controversy in the last presidential election -- or, even more, anyone who wasn't following it and might not be aware of the issues -- should check out Robert F. Kennedy, Jr's article in Rolling Stone, Was the 2004 Election Stolen?

The article is long, detailed and well researched, and it will make you question whether we really live in a democracy.

Apparently Kennedy is considering filing whistleblower lawsuits against two of the voting machine companies. This won't do anything to change our national elections, but at least it might help get the word -- and the evidence -- out into the public eye.

[ 12:36 Jul 06, 2006    More politics | permalink to this entry | ]

Thu, 19 Aug 2004

Wrong Time for an E-Vote Glitch

Wired has had great coverage of the e-voting fiasco all along, but the latest story is particularly impressive: Wrong Time for an E-Vote Glitch. Sequoia Systems (suppliers, to our shame, for Santa Clara County, though at least they're not as bad as Diebold) had a demo for the California state senate of their new paper-trail system. Turned out that their demo failed to print paper trails for any of the Spanish-language ballots in the demo. It wasn't just a random glitch: they tried it several times, and every time, it failed to print the Spanish voters' paper trail.

What a classic. I wish advocates for the Spanish-speaking community would seize on this and help to fight e-voting.

Sequoia, of course, is claiming that it wouldn't happen in a real election, that the problem was they didn't proofread the Spanish ballot but they would for a real election. I'm sure that makes everyone feel all better.

Other news mentioned in the article: the California bill to require a paper trail has stalled, and everyone thinks that's mysterious because it supposedly had bipartisan support.

They don't mention whether that's the same bill which would have allowed voters to choose a paper ballot rather than a touchscreen machine. That's important, since those of us who don't trust the touchscreen machines need to know in time to request absentee ballots, if we can't use paper ballots at the polls.

[ 18:35 Aug 19, 2004    More politics/rights | permalink to this entry | ]