Nov 10 Excerpt: Audit Integrity, Organization, and Chain Of Custody

A. Integrity of the Random District Drawing

A new concern uncovered this year is the inaccurate list of districts used in the random selection process, which is required by law to be based on all of the districts in use for the election or primary. This inaccuracy directly impacts the integrity and credibility of the entire post-election audit.

In our November 2009 Report:

We noted that the public has no reliable mechanism for checking the accuracy of the districts used in the random selection process. Checking with the Secretary of the State’s Office indicated that they do not have a list of districts that is guaranteed to be up to date.

In our August 2010 Report:

We noted that our concerns were realized in this [August 2010] audit when non-existent and ambiguously identified districts were chosen to be audited. The selection process also may not have included some districts used in the election.

Once again, in the November 2010 random selection, an advocate, without extensive investigation, quickly discovered a district missing from the list of districts in the random drawing. Even though the Secretary of the State believed that Bridgeport would voluntarily audit twelve (12) districts in the Governor’s race, the Bridgeport districts were wisely included in the drawing. However, Bridgeport had twenty-five (25) districts in the election, and only twenty-four (24) were included in the list of districts in the drawing supplied to us by the Secretary of the State’s Office.

When districts move or are identified in various ways – with and without district numbers, with and without polling place location – it can be challenging or almost impossible to verify that the list of polling places for the drawing is accurate or that the selected district is actually the one audited.

After the fact, it is possible to discover non-existent districts that were selected, but it would be quite challenging to identify districts not included in the selection list from the 169 towns. In either case, there is no current, established legal or procedural means to restore the integrity of an audit based on a discovered inaccuracy.

An accurate, verifiable list of districts for selection is critical to the integrity of the audit. Missing or incorrectly specified districts can be the result of error or deliberate action on the part of election officials. If all discovered inaccuracies in the list are dismissed as errors, then the opportunity is opened for cover-ups, for fraud or for steering the audit away from particular districts.

B. Procedures Unenforceable, Current Laws Insufficient

As we have noted in previous reports, discussions with representatives of the Secretary of the State’s Office and the State Elections Enforcement Commission (SEEC) indicate that many, if not all, of the post-election audit procedures, including those covering chain-of-custody, are unenforceable.  There is no incentive for following the procedures and no penalty for disregarding them.

We note that the adherence to prescribed chain-of-custody and ballot security procedures varies widely among audited districts. Laws that govern the sealing of ballots, memory cards, and tabulators after an election are unclear. Ballots are not uniformly maintained in secure facilities and access to these storage facilities is not reliably logged or recorded, even though two individuals are required to be present when these facilities are accessed.  In many towns, each registrar could have individual, unsupervised access to the sealed ballots, and in many towns, several other individuals have such access.  The lack of uniform security of the ballots diminishes confidence in the integrity of the ballots which are the basis for the data reported in an audit.

We emphasize that this report does not question any individual’s integrity.  However, a safe, credible system of security procedures would not enable a single individual to have any extended opportunity to access ballots unobserved.

C. Procedures Are Not Being Followed, Understood

The Secretary of the State’s Office continues to publish incrementally improved audit procedures for each election, often basing those improvements on suggestions from Coalition members.  However, they are frequently not followed, are not enforced, and, as noted previously, may not be enforceable. Additionally, the procedures still lack detailed guidance in efficient methods of counting that provide accurate and observable results. See Section D below.

In early 2010 the Secretary of the State’s Office initiated a joint effort between representatives of their office, the Registrars of Voters Association of Connecticut (ROVAC), Coalition representatives, and others.  Unfortunately, after two meetings, extensive review, and detailed recommendations, the Secretary of the State’s Office, due to time constraints, was only able to make a few changes to the existing procedures. We applaud the motivation for the initiative and would like it to reach full fruition.

Our observations indicate that some towns do a good job of using the procedures in the audit, following each step in order, and enhancing them with effective detailed counting methods.  However, in other towns, there is no evidence that election officials are referencing or following the procedures.   Some who attempt to follow the steps do not seem to understand them and appear to be reading the procedures for the first time at the start of the session.

Problems uncovered in this observation include: notification issues, incorrectly completed forms, chain-of-custody problems, lack of transparency, and actions contrary to procedures and the law.

Notification to Selected Towns and to the Public

Although we recognize an improvement in notification of towns by the Secretary of the State’s Office, some towns reported they had not been officially notified of their selection for audit for several days after the random district selection.

In past observations we have noted improvements by election officials in providing advance notice of the audit schedule, informing the Secretary of the State’s Office of that schedule, and, in turn, improvements in that office informing the Coalition. However, things went differently this time:

This year the audits were particularly challenging for the Coalition, our observers, and election officials.  In this election the law mandated that the audits be completed two days prior to certification, specifying that all municipal counting must be completed between November 17th and 22nd, whereas in previous large-scale audits there was often a period of two to three weeks for the counting. In addition, the districts for audit were chosen just two days prior to the start of the counting period, on November 15, 2010, while in past elections the selection had occurred six or seven days prior to the audit period.

Several audits occurred on November 17th and 18th, in violation of the Secretary of the State’s procedures, which require three business days’ notice to the public and the Secretary of the State’s Office.

Incorrectly Completed Forms and Incomplete Audit Counting

Reviewing the seventy (70) district reports submitted to the Secretary of the State, we note that:

  • Six (6) reporting forms were not accurately completed.  Without complete information, it is difficult to create comprehensive statistics or to depend on the audits as a vehicle for assessing the voting machines’ accuracy and correct programming.
    • Two (2) towns did not fill in the appropriate columns on the form.
    • Two (2) towns did not provide overall ballot count totals counted as part of the audit, as required, or filled in obviously incorrect numbers for the overall count.
    • Two (2) towns filled in multi-page reports for the same district with different ballot counts on each page, including one with obvious counting errors.
    • Three (3) towns did not fill in all columns on the reports, e.g. they supplied tape counts and vote totals not separated between undisputed and questionable votes.
  • Four (4) towns demonstrated a lack of understanding of questionable votes:
    • One (1) town reported negative questionable votes to balance their numbers.
    • Three (3) towns explained discrepancies by calling them “questionable ballots.” However, two of these listed no questionable ballots and the other listed very few compared to the thirty (30) they referenced in their explanation.
    • Several towns did not fill in the column for questionable votes. In those towns, we assumed they found none.
  • One (1) town reported higher vote counts than ballots, stating that some of the ballots might have been read twice.  This same town also reported one number for unknown votes, not associating them with either of the cross-endorsed candidates in the race.
  • One (1) town attributed differences to an error in their hashmark sheets.
  • Twelve (12) towns explained differences by either “Hand count errors,” “Human errors” or similar vague language.  This is an increase from none in August 2010 and almost equal to the thirteen (13) towns using this vague explanation in November 2009.

Selected quotes from official audit report forms and our commentary in brackets:

Questionable Ballots were read by the scanner differently. As to what the Audit workers read them as being. [But they reported no questionable votes on the form]

3 Questionable votes – voter intent was for Jepsen [No questionable votes listed on report for any candidate]

These differences are extremely small and any discrepancies are well within the expected margin of human error [Biggest difference is 9 vs. 6. We are unaware of any recognized/established level of expected human error.]

The attributed vote difference(s) …can be attributed to disputed ballots not being processed by the optical scanner OR human error in the manual counting of ballots. [Short 10 ballots in hand count]

Speculation that difference between hand count and tabulator tape was inclusion of several ballots from auxiliary bin [We question that hypothesis, since it would be difficult for 5 extra ballots to produce fewer votes for several candidates than the tape]

We put aside 30 very questionable ballots…Apparently some were read by the machine, some were not [But none listed on the form as questionable.  Several vote count differences]

All differences can be accounted for by the questionable ballots [But they only list 3 on the form whereas there are differences up to 12.]

Given the large number of ballots in the district, and the number of teams that were working in the audit, we believe the differences are entirely related to human counting error. In addition, our tally sheets had a misalignment of name with tally column which may have lead some tallies being put in the wrong spot. [We would expect them to fix their forms and count again.] This is the most likely cause of the only “major” discrepancy, Dean vs. Jepsen [Dean off by 14 votes, Jepsen off by 8 votes. The number of ballots is also off by 12 and several other differences in votes are off by as many as 7]

11 off [one candidate for Gov] race because eventually they were counted by workers they were not sure it the machine actually counted it [that would be an explanation if it happened only in one race for one candidate, but counts are off in several other races]

Counting errors. Not separated into questionable votes. Totals are close [off by up to 25 votes.  We do not classify that as close.]

Images of the actual official Audit Reports supplied from the Secretary of the State’s Office can be viewed at: https://ctelectionaudit.org/official-audit-reports/

C.1 A Really Questionable Audit In One Municipality

One municipality audited three districts. In all three districts, the town reported differences in votes.  In one of those districts, they reported 147 fewer ballots in the hand count than in the machine count, and in another district 11 fewer. Overall, across the three districts in this municipality, the audit vote counts for Governor for Foley, Malloy, and Marsh were lower than those reported on election night by 34 votes (14%), 174 votes (23%), and 3 votes (28%), respectively.
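The differences and percentages above imply rough election-night totals for each candidate in these districts. As a back-of-the-envelope plausibility check (our own inference from the figures reported, not from the official forms), a shortfall of D votes that equals P percent of the election-night count implies an election-night total of roughly D / P:

```python
# Coalition-reported shortfalls and their stated percentages of the
# election-night counts (Governor's race, three districts combined).
shortfalls = {"Foley": (34, 0.14), "Malloy": (174, 0.23), "Marsh": (3, 0.28)}

for candidate, (votes_short, fraction) in shortfalls.items():
    # Implied election-night total: shortfall divided by its stated fraction.
    implied_total = votes_short / fraction
    print(f"{candidate}: {votes_short} fewer votes ≈ {fraction:.0%} "
          f"of roughly {implied_total:.0f} election-night votes")
```

Taking the percentages at face value, the implied election-night totals are on the order of 243, 757, and 11 votes for Foley, Malloy, and Marsh, respectively, in these three districts.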

Selected quotes from official audit report forms and our commentary in brackets:

Seventy-four (74) ballots counted by hand had no votes for offices of Governor, State Senate, or Attorney general [We do not understand how that would change any ballot totals downward and make a difference in the expected vote count]

There were a high number of write-in ballots on tape of one hundred ninety-one (191) for the office of Assembly District for State Representative [Write-ins for State Rep have nothing to do with the races being audited, since votes on write-in ballots for other races are counted by the machine]

There were (15) fifteen write-ins on tape. There was (1) hand-counted ballot in the mix of valid ballots inserted into the tape [That might explain a portion of the ballot count difference, but the difference is 11 votes short in the hand count, not 15. Write-ins are supposed to be part of the audit, so if they did not count them, they should have. If a hand-counted ballot was counted by the scanner, it could have been included in the hand count and would then not affect the results of the audit.  If it was not counted, it could explain a difference of no more than one in each race, while differences in this district ranged from 6 to 15.  We wonder what happened on election night that one ballot was counted both by hand and by scanner, and, if it was not counted in the audit, how it was identified.]

C.2 Five Municipal Audit Reports Not Submitted To Date

As of the date of this report, more than three months after the election and more than two months after the completion of the audit counting period, according to the Secretary of the State’s Office, five (5) municipalities have yet to submit the required audit reports of the counts in their audits. We have requested copies of these reports from the Secretary of the State’s Office several times, and our understanding is that they have repeatedly asked for those reports but have been unsuccessful. While there is no time requirement in the law for sending such reports to the Secretary, this is too long to wait for voting integrity to be accomplished.

The Coalition has chosen to issue the report without those five towns in the interest of providing timely information to the public and the legislature.  When such information becomes available, if it significantly changes the results we will provide an addendum.

Multiple Chain of Custody Concerns

In several observations[1], observers expressed concerns with the chain of custody in several ways. Overall, observers in eight (8) municipalities expressed concerns with the chain of custody.  In November 2009, eleven (11) observations expressed concerns, and in August 2010, eight (8) observations expressed concerns.

Selected observer comments[2]:

The bags had been opened and resealed with different seals, and the sheet recording the original seals had been “lost” amidst the other papers and ballots in a large number of envelopes. When we arrived at the audit a good 25 minutes early, all seals had been broken and several workers were well along in the process of counting ballots into stacks of 25. When we left, the town vault was locked and the registrars did not have a key, so the sealed bags of ballots were simply locked in an office overnight until the vault would be accessible the next day. The report with the original seal numbers was “lost” among all the envelopes from other districts/races, according to the registrars. They said they had had to open things and reseal with different seals because they thought they were going to do a recount of a close race in their district. [The Coalition does not understand how a predicted recanvass would be a reason for unsealing ballots; it would seem to be a reason to make sure ballots were sealed and preserved.]

Boxes of ballots were in the room when I arrived, but unopened. They had run out of bags and used sealed cardboard boxes, with seals taped to the box using seal tapes

Ballots were briefly left with just one person (or with observers only) in the room on more than one occasion

One seal bound paired zipper handles; one bound web carrying straps but not zipper

Here is a photo…It is the bag of ballots which the registrar delivered to the room. Note that the seal was already broken. Only 1 registrar delivered the bag

Transparency

The Secretary of the State’s Audit Procedures state that observers should be allowed to view every aspect of the proceedings. Once again, we point out that the random selection of races is performed in a separate event from the audit and, unlike the counting session, the race drawing is not required by law to be public. However, a public drawing requirement appears in the Secretary of the State’s Post-Election Audit Procedures. We applaud the Secretary of the State for holding the race drawing publicly for this audit.

All aspects of the audit and as much as possible of the entire selection process should be transparent, open to the public, and publicized in advance in an easily accessed announcement.

One additional problem in the procedures and the law is that there is no formal public notification process when one of the audits is legally cancelled and an alternate is selected for audit.  The Coalition and the public are often unable to discover, in time to observe, that one audit has been cancelled and another town notified to conduct an audit.

Overall, of thirty-nine (39) counting sessions observed, only two (2) observations noted concerns with transparency.

In one municipality, our observer noted:

There was no way to check the accuracy of the counts and no recounts were attempted when there were discrepancies discovered at the end when totals were added up. The supervisor referred frequently to the paragraph in the Audit Procedure Manual regarding observers being allowed to observe only, not have access to copies of the forms.

Yet the Secretary of the State’s Audit Procedure Manual says:

The State of Connecticut is committed to an open, public, and transparent process.  Public Act 07-194 specifically provides that the audit “shall be open to public inspection.”  This means that observers should be allowed to view every aspect of the proceedings, including being close enough:  (1) to actually see ballots as they are being counted; (2) to see tally sheets as they are being marked and when they are complete; (3) to see report forms to be sent to the Office of the Secretary of the State;…

In late January, after the November 2008 audit, and again after the November 2009 audit, there were post-audit investigations conducted by the Secretary of the State’s Office, recounting ballots in several towns where large discrepancies were reported or reports were incomplete. Those investigations were not announced publicly and not open to public observation. The transparency and confidence in the official state audit report would be enhanced if such investigations were announced and open to the public.

D. Guidance, Training, and Attention to Counting Procedures Inadequate, Inconsistently Followed

Audit Organization and Counting Procedures:

Observers expressed concerns that many of the audits were not well organized. Out of thirty-nine (39) audits observed, the observers noted the following:

  • In eleven (11) audits, observers had concerns that the auditing was not well organized.
  • In seven (7) audits, observers had concerns with the integrity of the counting and totaling process.
  • In twelve (12) audits, observers had concerns that the manual count was inaccurate.
  • In twelve (12) audits, observers had concerns that the results on the reporting forms were inaccurate.
  • In eleven (11) audits with counts that did not originally match, the votes or ballots were not recounted a second time.

Need for Dual Verification

Observers noted that audit counting procedures requiring “two eyes,” i.e., dual verification of counts, were frequently ignored. When a large number of ballots are counted by a single individual, miscounts can require tiring recounts and unnecessary investigation. When single individuals count hundreds of ballots or votes, errors are almost inevitable.

  • When using the hash mark counting method, in twelve (12) observations a second official did not verify that votes were read accurately by the first official or that hash marks were recorded accurately.
  • When counting ballots, in seven (7) observations a second official did not verify ballot counts.

Blind Counting

Blind counting is a method of counting without pre-conceived knowledge of the expected outcome.  When counting teams know the tabulator totals or know the differences between their counts and the machine totals, there is a natural human tendency to make the hand count match the machine count.  This risks taking shortcuts and seeking cursory explanations for discrepancies which, in turn, lowers the credibility of the process and undermines confidence in the audit results.

  • In sixteen (16) observations, counters were aware of ballot or race counts from the election while they were counting.
  • In twenty-four (24) observations, when counts were off, counters were informed of the level of difference while they were recounting.

When election officials know the election totals or the differences between manual and machine counts, there is a tendency to accept any explanation or any new count that reduces the difference without an additional verification.

Some observers’ comments:

All in all, I thought the recount process was well-organized and thought out.  There were a lot of people on hand, and a lot of double-checking.  As a former accountant, I have no concerns with the process I observed; it could be used as a model for others

Two team members working independently, checking each other’s work ONLY if there was a discrepancy with the machine tally. This makes the assumption that the machine tallies are correct (or more correct than human counting). Observed incorrect grouping of batches, placing votes in the pile for the wrong candidate, counting votes without verifying that the vote was correct

We were given a copy of the moderator’s report from election day, but not the tapes. The numbers on the election day report do not appear to agree with those recorded on the audit report form, but we are not sure we are able to understand how these are recorded well enough to see whether the discrepancies are real

As part of the training registrar told the counters: “Ballot count should match the tape”

One counter hadn’t arrived when the instructions were given. They did explain that the goal was to match the ballots they counted to the totals from the machines. [The goal should be to count the ballots and votes accurately.]

One team used the stacking method. Another team flipped through each batch once for each race/party, counting aloud and recording the total. Both were looking at the race as one flipped through the ballots. The other two teams had one person calling all three races and the other hashing.

The number of ballots was originally off and the registrar announced that they were over by ten and that they were looking for where they were over by ten. When votes were counted the entire group discussed discrepancies

They recounted the team totals which resulted in improved, though not exact matches. They explained the differences as being due to human error. Since so few teams actually rechecked their counts I am sure they are right

Counters’ tallies were totaled after counters were excused. A lot of time was spent trying to account for discrepancies. I did not understand all the whispering and did not want to intrude with a lot of questions. At one point we heard a supervisor talk of “averaging.” I don’t know why there were no questionable votes on the audit reports

There was no real attempt to have the reader or the hash-marker observed by a 2nd person. Theoretically, they could have checked on each other but that did not appear to be happening. There was no way to check the accuracy of the counts and no recounts were attempted when there were discrepancies discovered at the end when totals were added up

Some races & candidate totals matched and some did not. Some tallies were off by as much as 5 votes. No effort was made to reconcile these differences.

They made an effort to use sharp people, and they were uniformly sharp.

Confusion in Definitions of Ballots with Questionable Votes

There continues to be confusion in the definitions of “ballots with questionable votes” (marks that the machine may have misread) and those ballots that should be considered “undisputed”:

  • On the official reporting form, some towns fail to classify any ballots as having any questionable votes.   Other towns classify many ballots as questionable, when clearly the machine counted the vast majority of those votes.
  • There is often confusion between differences in voters’ intent that would not be recognized by the scanner and marks that may or may not have been read by machine.
  • Observers report a wide variety of interpretations, counting methods, and classification methods.  In some towns, counting of ballots with questionable votes is left to individual teams; in others, they are counted by the supervisors; in still others, there is a general discussion at the end of the counting in which all officials agree that they saw enough questionable votes to explain all differences. Often the frustration and uncertainty of questionable ballot counting leads to much confusion in the totaling of votes.

There is a need for further examples of questionable votes, clarification of ambiguities, and instructions on how to classify and count questionable votes in the procedures.

Some observers’ notes[3]:

The Registrar and counters discussed how far they were off. They talked around the conference table with counters until all agreed that all differences were based on questionable ballots they had seen during the counting (they had left them in the piles; one team had counted them along with other votes, and the other team had not counted them).  They just did not seem to get the purpose of classification of questionables and the need to judge them objectively.

Questionable votes were determined only when one race was determined to be questionable. Differences were resolved considering 12 questionable ballots and six write-in ballots that were included in the audit.

Questionables, left in stacks and discussed to justify as difference in the end.

Ballots initially deemed “questionable” were not separately tallied for Audit Report. Instead, votes were attributed to candidates and “Questionable” figure on Audit Report was calculated as the mathematical result needed to “balance” the Machine Totals

In one race, two questionable votes were kept aside until votes were totaled, then the two in question would be assigned “wherever they were needed” to make the totals accurate. I never saw a supervisor decide how to count questionable votes…At the end of the audit, any discrepancies were attributed to human error, although there was discussion about machine error… in the discussion at the end the active supervisor said ballots with “pinprick marks” that were counted by the teams should have gone in the questionable pile. That might explain why one candidate was off by +10, while [unlikely to explain why] an opponent was off by -14 votes.

Counting Write-In Votes and Cross-Endorsed Candidates

Two years ago we noted a high degree of confusion and a lack of training of counters in counting votes for cross-endorsed candidates.  This year, as last year, we can report great improvement in this area: we note no less accuracy in counting cross-endorsed candidate votes than in counting those for other candidates.

However, we note a wide variety of classifying and counting methods.  Most towns report all votes for a candidate for each party and for the “unknown” category, which is the most straightforward way to check results against the scanner tapes. Some towns count all votes together for all parties, and others lump “unknown” votes with those for one party or the other. This is another area where additional standards, procedures, and training are required.

Ballots with write-in votes caused confusion in past audits. Some officials seem to lack an understanding of how write-in votes are counted by the scanner and how they should be counted by hand in the audit. In this audit we noted only one town attributing counting differences to write-in ballots.


[1] Although we observed a total of thirty-nine (39) counting sessions, we did not observe every attribute of every audit:  some questions did not apply in some audits, observers could not fully observe audits that continued beyond one day, etc.

[2] All comments by observers in this document have been edited for length, for grammar, and to make the meanings clear.