Monday, March 30, 2009

Search Warrants and the Attorney-Client Privilege

A recent case from Georgia illustrates how federal prosecutors deal with the task of preserving attorney-client privilege when a search warrant is executed at a law office.

The case is U.S. v. Sutton (U.S. District Court for the Middle District of Georgia 2009), and here are the facts that led to the search:
Berrien Sutton is an attorney in Homerville, Georgia, at the firm of Sutton & Associates, P.C. (“S & A”). On July 17, 2008, Sutton was indicted for one count of honest services fraud conspiracy, several counts of mail fraud, and one count of conspiracy to commit mail fraud. The charges against Sutton followed a federal investigation into allegations of official corruption in the Alapaha Judicial Circuit in South Georgia. Among other things, the Indictment charges that Brooks E. Blitch, III, a former superior court judge in the Alapaha Circuit, created unnecessary government positions and appointed Sutton and his wife, Lisa, to those positions to help them get out of debt. In return, Blitch allegedly received free legal services from Berrien Sutton.
U.S. v. Sutton, supra.

In June 2008, an FBI agent applied for a warrant to search the S&A office. When an officer applies for a search warrant, he/she submits an affidavit (a statement made under oath) explaining why he/she wants the warrant, why there is probable cause to justify its issuance, where he/she wants to conduct the search and what, exactly, he/she wants to search for.

In the affidavit the agent submitted for the S&A office, he said “there was probable cause to believe that evidence of a conspiracy to commit mail fraud was located at S & A; specifically, evidence that Blitch appointed Sutton and his wife to unnecessary government jobs in return for free legal services.” U.S. v. Sutton, supra. The information the agent actually included in the affidavit would have been much more detailed and specific than this statement implies; it’s just a summary.


The court found that the affidavit and the rest of the application justified the issuance of the search warrant, and so gave the FBI agent his warrant. It authorized him and the agents assisting him to search for items that included the following:
(a) Documentation in whatever format pertaining to the representation of Blitch Ford, Brooks E. Blitch III, Margaret Peg Blitch, and/or Brett Blitch, by Sutton and/or Sutton and Associates, P.C. (SAP), including but not limited to contracts for services, payment records, letters or other memorandum outlining services to be performed and fee agreements.

(b) Records in whatever format pertaining to billing records for Blitch Ford, Brooks E. Blitch III, Margaret Peg Blitch, and/or Brett Blitch.

(c) Records in whatever format pertaining to contingency agreements with Blitch Ford, Brooks E. Blitch III, Margaret Peg Blitch, and/or Brett Blitch.

(d) Computer hardware, meaning any and all . . . electronic devices capable of data processing. . . .
U.S. v. Sutton, supra.

The agents executed the warrant and seized evidence. After he was indicted, Sutton moved to suppress the evidence they seized at the S&A office. We, though, are only concerned with part of that evidence: evidence implicating the attorney-client privilege.

That evidence seems to have fallen into two categories. The first consisted of a “printout of an email that Sutton sent to Withers, his defense attorney in this criminal case.” U.S. v. Sutton, supra. The email was clearly protected by the attorney-client privilege because it was a communication by a client (Sutton) to his lawyer (Withers) that apparently dealt with the scope of the representation (i.e., dealt with issues in the case against Sutton).

The other category consisted of “documents that related to Sutton's representation of the Blitches.” U.S. v. Sutton, supra. Those documents may also have been protected by the attorney-client privilege; they dealt with information pertaining to Sutton’s (a lawyer’s) representing the Blitches (his clients) on various legal matters. The privilege could apply to these documents even though they had nothing to do with the case against Sutton; as long as material relates to the substance of an attorney’s legitimate work on behalf of his clients, it’s protected by the privilege. (Material won’t be protected by the attorney-client privilege if the attorney’s work involved assisting the clients in committing future crimes, a rule known as the crime-fraud exception, but that exception may or may not have applied here.)

Sutton argued that the use of the search warrant to find and seize the email printout and the documents pertaining to his representation of the Blitches violated the attorney-client privilege and therefore required the suppression of this evidence. The federal district court disagreed.

To understand why the court disagreed, we need to review the procedure the agents who executed the warrant used to prevent the privilege from being compromised. The U.S. Attorney’s Manual, which is a statement of policies federal prosecutors and federal agents are to implement, deals with law office searches. It says “[p]rocedures should be designed to ensure privileged materials are not improperly viewed, seized or retained during the course of the search.” U.S. Attorney’s Manual § 9-13.240. It also says that
[w]hile every effort should be made to avoid viewing privileged material, the search may require limited review of arguably privileged material to ascertain whether the material is covered by the warrant. Therefore, to protect the . . . privilege and to ensure that the investigation is not compromised by exposure to privileged material . . ., a "privilege team" should be designated, consisting of agents and lawyers not involved in the underlying investigation.
U.S. Attorney’s Manual § 9-13.240. That is exactly what the agents did in the Sutton case. The agent’s affidavit provided
for a “taint team” procedure to protect . . . the attorney-client privilege. This . . . search would be conducted by a “privilege search team” that consisted of . . . agents who had no previous involvement in the . . . investigation. The team would be assisted by a “privilege prosecutor” with no knowledge of, or involvement in, the investigation. The privilege prosecutor would be responsible for answering any legal questions that arose during the search. The privilege search team would deliver all seized evidence to the privilege prosecutor, who would deliver to the prosecuting attorneys all non-privileged documents. All privileged documents and documents outside the scope of the warrant would be returned to S & A by the privilege prosecutor.
U.S. v. Sutton, supra. The district judge therefore denied Sutton’s motion to suppress because the investigators followed this procedure:
[T]he agents delivered directly to the privilege attorney all evidence seized from S & A. The privilege attorney kept the documents in sealed envelopes and reviewed them to determine whether they were privileged. At no time did any prosecuting attorneys view any privileged material. For these reasons, the Court finds that Sutton has failed to demonstrate that the search violated his attorney-client privilege.
U.S. v. Sutton, supra.
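
If it helps to see the routing as a concrete procedure, here is a toy sketch in Python of the partitioning step the privilege prosecutor performs. The item structure and field names are my own illustrative assumptions; neither the opinion nor the U.S. Attorney’s Manual expresses the procedure in code.

```python
def route_seized_items(items):
    """Partition seized items the way the privilege prosecutor does:
    privileged or out-of-scope material goes back to the law firm,
    everything else goes on to the prosecuting attorneys."""
    to_prosecutors, to_return = [], []
    for item in items:
        if item["privileged"] or not item["within_scope"]:
            to_return.append(item)
        else:
            to_prosecutors.append(item)
    return to_prosecutors, to_return

# Hypothetical seized items, tagged during the privilege review.
seized = [
    {"name": "Blitch billing records", "privileged": False, "within_scope": True},
    {"name": "email to defense counsel", "privileged": True, "within_scope": False},
]
forward, send_back = route_seized_items(seized)  # only `forward` reaches the trial team
```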

I don’t know how many computers and how much data they had to go through, but the Sutton case seems to be an essentially simple case, at least in terms of applying the taint team procedure. In 2002, Lynne Stewart, a criminal defense attorney in New York City, was the target of a terrorism investigation. She had represented Sheikh Abdel Rahman, who was convicted of being involved in the 1993 World Trade Center bombing; the government suspected Stewart was conspiring with the incarcerated Sheikh and was involved in other crimes, so it got a warrant to search her law office.

That search was especially complicated because her office was “part of a larger suite,” which she shared “with four other solo practitioners.” U.S. v. Stewart, 2002 WL 1300059 (U.S. District Court for the Southern District of New York 2002). The computers used by the five lawyers and their staff were all networked, and at least one seems to have been in common use. The attorney-client privilege issues were, therefore, very complex.

Federal agents executed the search warrant at the offices and seized lots of material, including computer data. The court noted that the data was particularly “likely to contain privileged materials.” The government intended to have a privilege team review the data to sort out what was privileged and what was not, but Stewart said that wasn’t enough. She asked the court to appoint a Special Master – an outsider – who would conduct the initial screening of the material. She argued that the special complexity and sensitivity of the search and the material seized required that, and the court agreed. U.S. v. Stewart, supra. I can only find a couple of reported cases in which courts have appointed Special Masters, but I think it can be particularly important when computer search warrants are executed at law offices (or doctors’ offices, too, for that matter).

Friday, March 27, 2009

iPhone Search Warrant

Last year I did a post on a case in which the court held that officers could search an arrestee’s Blackberry under the search incident exception to the 4th Amendment’s warrant requirement.

As I explained in that post, the Supreme Court has held that it is reasonable (and all the 4th Amendment requires is reasonableness when it comes to searches and seizures) for an officer to search the person of an arrestee (pockets, purse, shoes, clothes) in order to find (i) any weapons he/she may have and/or (ii) any evidence he/she might be able to destroy if it isn’t located early in the process.

This post is about a slightly different procedure that was used for an iPhone. The procedure was used in the case of United States v. Lemke, 2008 WL 4999245 (U.S. District Court for the District of Minnesota 2008).

The case began when FBI agents trolled a “hard core child pornography message board located in Japan, . . . the ‘Ranchi’ message board” in an effort to identify U.S. citizens who were using the board to trade child pornography. U.S. v. Lemke, supra. I won’t go into all the details of the investigation; suffice it to say that the agents used a ploy to (allegedly) get Lemke to try to download an image of child pornography that they’d advertised on the Ranchi message board. When the agents reviewed the log file for the website they’d sent Lemke (and others) to, they harvested “several hundred unique” IP addresses. U.S. v. Lemke, supra.
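
As a technical aside, “harvesting” unique IP addresses from a web server log is a routine parsing exercise. Here is a minimal sketch, assuming an Apache-style access log whose lines begin with the client address; the file name and format are my assumptions, since the opinion doesn’t describe the government’s tools.

```python
import re

# Matches the client-address field at the start of a common-format access
# log line, e.g. '203.0.113.7 - - [12/Mar/2009:10:15:01 -0600] "GET / ..."'
LINE_RE = re.compile(r"^(\d{1,3}(?:\.\d{1,3}){3})\s")

def unique_ips(log_path):
    """Return the set of distinct client IP addresses in an access log."""
    ips = set()
    with open(log_path) as log:
        for line in log:
            match = LINE_RE.match(line)
            if match:
                ips.add(match.group(1))
    return ips

if __name__ == "__main__":
    hits = unique_ips("access.log")  # hypothetical file name
    print(len(hits), "unique IP addresses")
```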

The agents traced several of the hits on the site to an IP address controlled by Charter Communications. After they served Charter with a subpoena, it identified Lemke as the “subscriber for that particular IP address” and gave the agents his home address. The agents confirmed that Lemke lived there, and discovered he had been investigated a couple of years before for allegedly molesting a 4-year-old girl. U.S. v. Lemke, supra.

With that and other information, they obtained a warrant to search Lemke’s house. They executed the warrant and seized a computer, along with other evidence. The forensic analysis of the computer turned up child pornography, including images and videos showing Lemke “engaged in sexual misconduct with three (3) different pre-pubescent girls”. U.S. v. Lemke, supra. The officers got two more search warrants, executed them, and seized a lot more evidence of varying kinds. U.S. v. Lemke, supra.

At some point (it isn’t clear from the opinion), they arrested Lemke. When they arrested him, he was carrying a “black Apple iPhone.” U.S. v. Lemke, supra. Instead of searching the iPhone pursuant to the search incident exception, the agents “secured [it] at the St. Cloud Police Department, pending a forensic analysis . . . to uncover the assigned telephone number, any stored telephone numbers, and any incoming or outgoing calls, text messages, photographs, or videos.” U.S. v. Lemke, supra.

A St. Cloud police officer, Jolene Thelen, then submitted an application for a warrant to search Lemke's iPhone. (State and federal agents often cooperate in this kind of case).
In addition to the information above, and in support of the . . . Warrant, which authorized a search of the contents of the . . . iPhone, Thelen attested that the Defendant had been arrested, and law enforcement had recovered a black Apple iPhone on his person. [She] attested that the iPhone was secured in her desk at the police department, pending the issuance of a Search Warrant for forensic analysis of the data on the iPhone.

Thelen averred that, based upon her training and experience in law enforcement, she was aware child pornographers “commonly maintain a collection of child pornographic images in the privacy and security of their homes or some other secure location,” including “discs, hard drives, media storage devices, compact disc and thumb drives[.]” Thelen further attested that child pornographers “rarely dispose of the collections” . . . and to her belief, that a search of the . . . Defendant's iPhone, could reveal evidence of the Defendant's criminal conduct.
U.S. v. Lemke, supra.

The judge issued the warrant, the search turned up evidence, and Lemke moved to suppress the evidence the agents and officers found as a result of executing all the search warrants in his case. U.S. v. Lemke, supra.

Lemke’s motion to suppress the evidence found on his iPhone was based on one issue: He claimed that the information Officer Thelen provided in her application for the search warrant was not enough to establish probable cause to believe evidence of a crime (child pornography and evidence related to Lemke’s possessing and creating child pornography) would be found on the iPhone.

The federal district court judge found that the information she provided was enough to establish probable cause and justify the issuance of the search warrant. The court noted, first, that Officer Thelen and other St. Cloud officers had assisted with the execution of one of the warrants at Lemke’s home, during which she observed items that appeared in “sexually explicit photographs” seized from the home. U.S. v. Lemke, supra. The court also considered the information summarized above, which Thelen had included in the application for the iPhone search warrant:
[A]fter considering the totality of the circumstances, and after applying a practical, common sense reading to Thelen's Affidavit, we find that the Judicial Officer, who issued the Warrant in question, [was] provided with substantially more than a reasonable likelihood to believe that a search of the Defendant's . . . iPhone, would reveal evidence relating to illegal child pornography. We find nothing to dispute the conclusion, that law enforcement had established. . . . a nexus between the Defendant and the iPhone which was seized from his person, during the course of his arrest.
U.S. v. Lemke, supra. The court therefore denied Lemke’s motion to suppress the evidence found on the iPhone.

Why did the investigators in the Lemke case get a search warrant for the iPhone instead of relying on the search incident exception? Since Lemke was carrying the iPhone when he was arrested, the exception would presumably apply under the analysis the court used in the search incident of a Blackberry case.

I don’t know why they got the warrant, but I can speculate. They may not have wanted to risk the possibility of a successful challenge if they’d gone with a search incident to arrest. As I noted in another post I did on search incident, defendants have been arguing that cellphones (like Blackberrys and laptop computers) are containers that are simply too complex to come within the scope of the search incident exception to the warrant requirement. So far, most courts have held that the exception applies, but some have not.

So given all that, I suspect a court would have upheld the validity of searching Lemke's iPhone pursuant to the search incident exception, but it might have gone the other way. So maybe they were just being extra-careful. Or maybe they just wanted to eliminate even the possibility of a credible argument based on their use of the search incident exception; they obviously had enough to get a search warrant, and they applied for five different warrants in the case, so maybe it was just as easy to apply for one more as to risk the search incident approach.

Wednesday, March 25, 2009

Clergy-Penitent Privilege

In Waters v. O’Connor, 209 Ariz. 380, 103 P.3d 292 (Court of Appeals of Arizona 2004), Korri Waters was prosecuted for sexual misconduct with a minor, a 16-year-old boy.
During the pendency of her criminal case, Waters sent an e-mail to “Minister” D.W., the volunteer music director at Church on the Word, a . . . non-denominational Christian church. D.W.'s title was honorific, given to her and others . . . to differentiate them from other church officers and as a sign of respect. Waters had been an active member of the church and had developed a close friendship with D.W. Over several years the two . . . discussed Waters' marriage and “things” that friends talk about.

In her e-mail Waters wrote she missed the church, asked forgiveness for the “choice” she had made, explained she wanted her life back and stated she was “hungry to hear the word. . . .” She asked . . . how to start over. . . . D.W. forwarded Waters' e-mail to the church's minister, Pastor D.M., and asked him what she should say. With guidance from Pastor D.M., D.W. answered Waters' e-mail and told her that . . .

[i]f you truly want deliverance in your life, total and complete deliverance, you have [to] come clean about what you did. Everything.

I want to help you. Your first step is to tell me exactly what you did, what's going on now, what your plan is for the future. As far as getting your life back, you don't want the life you had before. You need something better. A life that is solid and secure, without shame. It starts by telling the truth. The whole truth. The ball is in your court.

Having been told by D.W. to tell her what she had done, Waters did exactly that. In a subsequent e-mail, Waters acknowledged her relationship with the minor, discussed its evolution and described it in graphic detail. D.W. forwarded this and other e-mails from Waters to Pastor D.M. who gave them to the minor's parents. The parents turned the e-mails over to the prosecutor in Waters' criminal case.
Waters v. O’Connor, supra.

The prosecutor wanted to call D.W. as a witness at Waters’ trial but Waters moved to bar her from testifying. She claimed D.W. “had acted as a ‘person of the clergy’ and had provided her with religious counseling.” Waters v. O’Connor, supra. Waters said her communications with D.W. were privileged under Arizona Revised Statutes § 13-4062(3), which is the clergy-penitent privilege statute that applies in Arizona criminal proceedings. Section 13-4062(3) prohibits the examination of a “clergyman or priest, without consent of the person making the confession, as to any confession made to the clergyman or priest in his professional character in the course of discipline enjoined by the church to which the clergyman or priest belongs.” Waters v. O’Connor, supra.

The court held a hearing on Waters' motion. At the hearing, D.W. testified that she was not an ordained minister, did not receive confessions and referred questions regarding church doctrine to Pastor D.M. Waters v. O’Connor, supra. As the church's music director, she directed the choir, selected and arranged worship service music and occasionally delivered the “message” during worship services when Pastor D.M. was out of town. D.W. said that while she and Waters had been friends, Waters had never before asked her for advice about “something like deliverance from sin.” D.W. also said this was the first time anyone had ever asked her for this type of advice. Waters v. O’Connor, supra.

Waters also testified. She said she believed D.W. was a minister and had confided in her as a minister, believing her e-mails would remain private -- except from Pastor D.M. -- because D.W. was a minister. She said that in the past she had confided in D.W. about her marriage and had sought her counsel as a minister. Waters v. O’Connor, supra.

The trial court denied Waters’ motion to bar D.W. from testifying because it found that D.W.’s position with the church was not that of a member of the “clergy.” Waters filed a motion asking the Arizona Court of Appeals to reverse the trial court’s decision, which the Court of Appeals agreed to do under its “special action” jurisdiction. An Arizona statute lets a court of appeals hear an issue in a case that has not gone to trial if it is an issue “of first impression, statewide significance” or a pure question of law. Waters v. O’Connor, supra. Since the Court of Appeals found that the issue Waters’ motion raised met all three conditions, it agreed to decide whether D.W. qualified as a member of the clergy for the purposes of applying the Arizona clergy-penitent privilege. Waters v. O’Connor, supra.

The court began its analysis of the issue by noting that the “privilege . . . belongs to the communicant: a clergyman may not disclose the communicant's confidences without the communicant's consent.” Waters v. O’Connor, supra. It also noted that the statute did not define “clergyman”. Waters argued that the clergy-penitent privilege should be
expansively defined, and should not be limited to formally ordained clergy. She argues that clergyman should be defined in a functional manner, and clerical status should be accorded to members of a religious organization who engage in functions akin to or customarily performed by members of the clergy. . . . [S]he contends that functionary and thus clerical status should be extended to individuals the communicant reasonably believes are acting as functionaries. Under her functionary equals clergyman definition, Waters' communications with D.W. would be privileged because Waters sought religious advice from D.W.; D.W. responded with spiritual counsel, just as a clergyman would; and Waters reasonably believed D.W. was, as befitting her honorific title, acting as a member of the clergy in providing that advice.
Waters v. O’Connor, supra.

Waters noted that a proposed (but not adopted) Federal Rule of Evidence would have defined clergyman as “a minister, priest, rabbi, or other similar functionary of a religious organization, or an individual reasonably believed so to be by the person consulting him.” Waters v. O’Connor, supra. After noting that various sources had criticized the expansive definition of clergyman in the proposed rule, the Arizona Court of Appeals held that the definition went too far because almost anyone in a
religious organization willing to offer what purports to be spiritual advice would qualify for clergy status. Such an expansive construction is contrary to how Arizona courts interpret privilege statutes. Generally, such statutes are to be restrictively interpreted . . . because they impede the truth-finding function of the courts. . . . Further, such an approach is not sufficiently linked to achieving the societal benefits justifying the existence of the clergy-penitent privilege.
Waters v. O’Connor, supra.

The Court of Appeals explained that the clergy-penitent privilege exists “because of a belief that people should be encouraged to discuss their `flawed acts’ with individuals who, within the spiritual traditions and doctrines of their faith, are qualified and capable of encouraging the communicants to abandon and perhaps make amends for wrongful and destructive behavior.” Waters v. O’Connor, supra. It therefore held that the privilege should not be expanded to include communications with those
who are not qualified to provide such advice. As this case demonstrates, D.W.'s honorific title and activities in the church did not qualify her to render this type of counsel and encouragement or to even advise on issues of transcendent belief, repentance and forgiveness. Therefore, we decline to adopt Waters' functional test for determining the meaning of clergyman.
Waters v. O’Connor, supra.

The court also held, however, that the term clergyman “is not limited to members of religious organizations having an ordained clergy.” Waters v. O’Connor, supra. The court found that such a restrictive interpretation would violate the Establishment Clause of the Constitution (the clause that bars Congress from adopting laws that prefer one religion over others). The court decided that whether someone “is qualified to be a clergy member of a particular faith is . . . to be determined by the procedures and dictates of that person's faith.” Waters v. O’Connor, supra. “Thus, . . . we hold that whether a person is a clergyman of a particular religious organization should be determined by that organization's ecclesiastical rules, customs and laws. Such an approach avoids denominational favoritism and is consistent with the aims of the clergy-penitent privilege.” Waters v. O’Connor, supra.

The court therefore found that D.W.’s status did not qualify her as a clergyman under the Arizona statute, which meant that neither the emails she exchanged with Waters nor her testimony about those emails was barred by the state’s clergy-penitent privilege.

This is so far the only reported case I’ve seen involving email or any other aspect of cyberspace and an invocation of the clergy-penitent privilege. I can’t imagine that such claims will become common, but I wouldn’t be surprised if the issue comes up again, since it seems reasonable to assume that clergy will communicate with members of their congregations online.

And I wonder: are there online churches? If so, I assume all the communications would be via email, text, etc., so the privilege would necessarily come up in that context.

Sunday, March 22, 2009

Cyber-Monroe Doctrine?

I was recently involved in a discussion in which someone argued that a cyberspace equivalent of the Monroe Doctrine would be the best way to protect the U.S. from spam and various kinds of cyberattacks.

The Monroe Doctrine, in case you’ve forgotten (I was pretty fuzzy on it), is a policy President James Monroe announced on December 2, 1823. As Wikipedia explains, the Monroe Doctrine “said that further efforts by European governments to colonize land or interfere with states in the Americas would be viewed by the United States of America as acts of aggression requiring US intervention.” President Monroe claimed the United States “would not interfere in European wars or internal dealings, and in turn, expected Europe to stay out of the affairs of the New World.” Wikipedia.

So as I understand it, the proponent(s) of creating a new cyber-Monroe Doctrine argue that we should do something similar in cyberspace. I assume this means we would put other countries on notice that any attempt to interfere with our “sphere of influence” in cyberspace (whatever that is) will be treated as an “act of aggression” and responded to as such.

Actually, as I understand the proposal, it has two parts: One is that if we find ourselves under attack (cybercrime, cyberterrorism, cyberwarfare, a fusion of any/all of them), we will seal our cyberborders and go into garrison state mode for at least as long as that cyber-emergency exists. The other part of the proposal seems to be more analogous to the original Monroe Doctrine; it seems to contemplate that we will declare “United States cyberspace” to be our exclusive sphere of influence and will regard any attempt to erode or otherwise interfere with that sphere of influence as . . . what? . . . an act of cyberwar, I guess.

Before we go any further, I have to put you on notice: I am not a fan of this proposal. As I’ll explain in a minute, it reminds me of a similar proposal outlined several years ago; I think both are flawed conceptually and empirically. And aside from anything else, I don’t think an early nineteenth century solution can be an appropriate analogy for dealing with a world that has drastically changed in so many respects.

The earlier, similar proposal came from Professor Joel R. Reidenberg in his article, States and Internet Enforcement, 1 University of Ottawa Law & Technology Journal 213 (2003-2004). Professor Reidenberg proposes that nation-states use two devices to deal with cybercrime; and while he’s only analyzing measures to control cybercrime, his proposal could apply with equal logic to cyberterrorism and/or cyberwarfare.

The first device is “electronic borders.” By electronic borders, Professor Reidenberg means the kind of filtering some countries – like China and Saudi Arabia – use to control the online content that is accessible by people in their territory. He argues that when a nation-state establishes an electronic border, it in effect quarantines cybercriminals (and, by implication, cyberterrorists and cyberwarriors) so they cannot cause “harm” in that country.

So if the U.S. were to adopt this device as a way of dealing with cyberthreats, it would establish an electronic border that, I presume, would seal “our” part of cyberspace off from the parts of cyberspace accessible in, and to, other countries. We would essentially be transposing geography into cyberspace; that is, we would be trying to export into the virtual world the notion of the nation-state as a sovereign entity that controls, and is defined by, the physical territory it occupies.
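
To make the idea concrete at the packet level, an electronic border amounts to filtering traffic against address blocks that have been declared “domestic.” The sketch below, in Python, uses documentation address ranges as stand-ins; notably, no tidy list of “U.S.-only” network blocks actually exists, which is itself one of the proposal’s practical weaknesses.

```python
import ipaddress

# Hypothetical stand-ins for a "domestic" address space.
DOMESTIC_BLOCKS = [
    ipaddress.ip_network("198.51.100.0/24"),
    ipaddress.ip_network("203.0.113.0/24"),
]

def stopped_at_border(src_ip):
    """Return True if a packet's source address falls outside the blocks
    declared 'domestic', i.e., it would be dropped at the electronic border."""
    addr = ipaddress.ip_address(src_ip)
    return not any(addr in block for block in DOMESTIC_BLOCKS)

print(stopped_at_border("203.0.113.9"))  # False: inside the border
print(stopped_at_border("192.0.2.44"))   # True: filtered out
```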


Not being technologically adept, I don’t know how feasible it would be to implement Professor Reidenberg’s electronic border. For the sake of analysis, though, I’ll assume that it could be implemented as effectively as he postulates. As I’ve noted before, when it comes to cyberthreats (cybercrime, cyberterrorism and cyberwarfare), our focus needs to be not on technology, as such, but on people.

It’s my understanding that people in countries where Internet filtering is being used are bypassing the filtering with varying degrees of success; I suspect techniques for evading such filtering will only improve over time (as, of course, will the filtering techniques). I suspect that if we tried to rely on the electronic border strategy, we’d succeed in quarantining law-abiding people and entities (which, of course, is not our real goal) but have much less success in keeping out the bad guys.

So my first objection is that the use of an electronic-border-enforced-cyber-Monroe-Doctrine would isolate the country using the strategy (the U.S.), thereby depriving it of the substantial and evolving benefits of online communications and commerce while doing relatively little to prevent dedicated bad guys from successfully attacking us. Such a solution would no doubt reduce the incidence of certain types of cybercrime, such as spam. Personally, though, I’d prefer to deal with spam and have the benefit of a borderless cyberspace.


The other device Professor Reidenberg proposes is an electronic blockade, which is the mirror image of the electronic border strategy. The electronic border is designed to keep the bad guys “out” of a demarcated area in cyberspace (the U.S.’ sphere of influence, whatever that would be); the electronic blockade is designed to keep the bad guys inside the country (area of cyberspace) from which they operate. It’s the cyber equivalent of the naval blockades that countries historically used to bottle up pirate ships or the ships belonging to a nation with which the state implementing the blockade was at war.

As Professor Reidenberg notes, an electronic blockade, unlike an electronic border, is a hostile act. If the United States were to implement an electronic blockade that prevented any packets from being transmitted “out of” the territory of Country A, Country A would certainly regard that as a hostile act, something conceptually equivalent to bombing Pearl Harbor. Now, as I’ve noted here and elsewhere, it is not at all clear that such an electronic assault would justify the retaliatory use of armed force under the current laws of war, but Country A might not care whether it was justified. Country A might retaliate in the real world . . . or it might figure out some way to retaliate in the cyberworld (by, say, hiring mercenaries or getting another, non-blockaded country to help it out).

Professor Reidenberg, of course, was not proposing the use of either device as a way to enforce a twenty-first century cyber-Monroe Doctrine. He was proposing both devices as ways to improve law enforcement’s effectiveness against cybercriminals. While I agree that this is something we need to do, I do not think either device would be particularly effective in that regard . . . and I certainly do not think either could be used to enforce the cyber-Monroe Doctrine we’re analyzing.

It seems to me isolationist solutions have no place in an increasingly linked world. I think electronic borders and blockades and a cyber-version of the Monroe Doctrine would be about as effective in keeping out cyber-intruders as was the Tokugawa shogunate’s attempt to keep foreigners out of Japan.

Friday, March 20, 2009

Robbery

We were talking about online theft in a class, and that led me to think about robbery. Specifically, I started thinking about whether it is possible to commit robbery online.

To understand why that may be an issue, you need to know a little bit about the law of theft: Centuries ago, English common law developed a crime called larceny. Larceny consisted of “taking another’s property from his possession without his consent, even though no force was used.” Lee v. State, 59 Md.App. 28, 474 A.2d 537 (Maryland Court of Special Appeals 1984). To constitute larceny, the person taking another’s property also had to do so with the intent to steal, i.e., with the intention to deprive the rightful owner of the possession and use of his/her property.

As I explained in an earlier post, common law later developed a related crime called “larceny by trick.” It needed the new crime to deal with situations in which I persuade you to give me your property by lying to you (telling you you’re buying a gold mine, say). In that situation, I clearly get your property wrongfully, and I implicitly have the intent to steal (to keep the money for myself, thereby depriving you of its possession and use), but I didn’t take it from you without your consent. We still have the larceny by trick crime; as I noted in an earlier post, though, we now call it “fraud.” And as I’ve explained in other posts, you can commit fraud online; indeed, cyberspace makes it even easier to commit lots and lots of fraud.

You can clearly commit larceny – or, as we usually refer to it, theft – online. In 1994, Vladimir Levin, a Russian working as a computer programmer for AO Saturn, allegedly hacked into the accounts of Citibank customers and transferred millions of dollars into accounts controlled by him and his cohorts. That kind of online activity is clearly theft: You’re taking the rightful owner’s property without his/her consent with the intent to steal it; we could arguably have an issue as to whether the property was taken from the owner’s possession, but that element isn’t particularly important in modern theft laws. It’s theft if you steal my car from a parking lot while I’m in a restaurant, for example. The Model Penal Code, which is a basic template of criminal laws, defines theft as unlawfully taking “movable property of another with purpose to deprive him thereof.” Model Penal Code § 223.2. The MPC doesn’t require that the property have been taken “from” the owner; it simply requires that you take property you know belongs to someone else in order to deprive him/her of it (not, for example, to borrow it with the owner’s permission).

I also did a post on a kind of theft that can only occur online: non-zero-sum theft, as opposed to the zero-sum theft in my example above. In zero-sum theft, the possession and use of the property is transferred entirely from the rightful owner to the thief, so the owner is completely deprived of the possession and use of the property; in non-zero-sum theft, the thief acquires only some quantum of the possession and use of the property, while the rightful owner retains the rest. As I noted in an earlier post, copying a company’s password file from one of its databases is an example of non-zero-sum theft; the company still has the passwords, but the thief has them, too. And the fact the thief has them deprives the company of some quantum of the value of the passwords; they can still be used, but they can’t be used with any confidence that the accounts they control are secure.
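
The zero-sum/non-zero-sum distinction is easy to see in code terms. A toy illustration in Python, with made-up data:

```python
# Zero-sum theft: the taking removes the property from the owner entirely.
owners_wallet = ["$100 bill"]
thiefs_loot = owners_wallet.pop()        # owners_wallet is now empty

# Non-zero-sum theft: copying the password file leaves the owner's copy
# intact; what the company loses is exclusivity, not the data itself.
company_passwords = {"alice": "hunter2", "bob": "swordfish"}
thiefs_copy = dict(company_passwords)    # two identical copies now exist
assert company_passwords == thiefs_copy  # the company still "has" everything
```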

That brings me back to robbery. Common law defined robbery as “the . . . forcible taking from the person of another any goods or money . . . by violence or putting in fear.” State v. Campbell, 41 Del. 342, 22 A.2d 390 (Delaware Supreme Court 1941). Robbery is an aggravated theft crime because it involves using force or violence, or the threat of force or violence, to induce someone to give up possession of his/her property. Robbery is also defined as larceny committed by violence or intimidation. The Model Penal Code defines it a little differently:
A person is guilty of robbery if, in the course of committing a theft, he:

(a) inflicts serious bodily injury upon another; or
(b) threatens another with or purposely puts him in fear of immediate serious bodily injury; or
(c) commits or threatens immediately to commit any felony of the first or second degree.
Model Penal Code § 222.1(1). The first two options under the Model Penal Code provision retain the common law requirements, but the Model Penal Code adds the third option – committing or threatening to commit a serious felony.

I can’t find any cases dealing with the third option, but the logic is that I can commit robbery by hurting you, by threatening to hurt you or by threatening to hurt someone (rape or murder) or something (arson) you care about. And if you’re wondering why the third option doesn’t constitute extortion (which is actually another kind of theft crime), here’s the reason: In robbery, I use the threat to commit a serious felony against you or something/someone you care about to force you to give me the money or property I’m after against your will; in extortion, the theory is, I use a similar threat, but you give me the property willingly (with your consent). You weigh the options and decide to give in, which makes it extortion, not robbery.

Let’s get back to robbery. I’m not sure that robbery in the traditional sense – the forcible taking of property from someone by using or threatening violence – can be committed online. As to the first option, I can’t think of a scenario – using current technology – in which you could inflict bodily injury on someone to get them to hand over their property. Since I don’t see how you can inflict serious bodily injury in the online context, I also don’t see how you can use the threat of such injury to commit robbery (at least not, again, using current technology . . . all this might be possible someday).

That brings us to the third, Model Penal Code option: Using the threat to commit a serious felony (e.g., rape, murder, arson) to coerce someone into handing over their property. I can certainly see that part of the dynamic would work online: I can send you a threat to burn your business or home, say, to coerce you to give me your money. The problem I have with taking this scenario to the next step, which is where it would become robbery, is the “immediately” requirement. Remember, robbery consists of using force or the threat of force to take the property or money from the person at that moment; it’s not intended to be a long, drawn-out process, which is what it seems to me it has to be online.

So far, anyway, I’m pretty sure that robbery in the strict legal sense cannot be committed online . . . .

Wednesday, March 18, 2009

Juror Misconduct and Technology

This post is about the effect technology is having on what is called juror misconduct. As I explain below, juror misconduct is the term used to refer to actions by jurors that are at least arguably inconsistent with their role in a criminal trial.

Article III of the U.S. Constitution and the Sixth Amendment create a right to trial by jury in criminal cases.

Article III of the Constitution says, in part, that the “Trial of all Crimes . . . shall be by Jury”. The Sixth Amendment says, also in part, that "In all criminal prosecutions, the accused shall enjoy the right to a speedy and public trial, by an impartial jury of the state and district wherein the crime shall have been committed."

The Seventh Amendment to the Constitution creates a right to jury trial in civil cases, but we’re only concerned with criminal cases.


The Sixth Amendment right to trial by an impartial jury not only means that the jurors are not supposed to be biased, i.e., not supposed to have a reason to lean toward either the prosecution or the defense. It also means that they’re to play a very passive role in the trial.

As you’ve no doubt seen on TV and in movies (and maybe in real life if you’ve ever sat in on a trial), the jurors sit and listen to what goes on until both sides have put on all of their evidence and each side has made its closing argument to the jurors. Before the closing argument, the trial judge will “instruct” the jury on the law; that is, the judge will summarize for the jury the basic law governing their functions (e.g., the presumption that the defendant is innocent, the requirement that the prosecution prove guilt beyond a reasonable doubt) and the specific law governing the crime(s) with which the defendant is charged.

So really what trial jurors do during trial is listen: They listen to the judge, they listen to the lawyers, they listen to the witnesses, and then they go off on their own (retire) to decide whether the prosecution has proven its case or not. Every once in a while, a juror or set of jurors will decide they should take a more active role in the case by, say, conducting their own investigation or experiments or using a dictionary to look up terms they think the judge didn’t define well enough.

The jurors may think they’re doing the right thing by trying to be as meticulous as possible, but in fact they’re violating the requirement that they be impartial. As I noted, the requirement of impartiality means that the jury ONLY decides the case on the basis of evidence that was introduced in court. As a Pennsylvania court explained,
[w]hen jurors conduct their own experiments . . ., the result is the introduction of facts that have not been subject to the rules of evidence or to cross-examination by either party. This is broadly defined as juror misconduct. However, when allegations of misconduct arise, it is the responsibility of the trial court to determine if the misconduct resulted in a reasonable possibility of prejudice.
Pratt v. St. Christopher’s Hospital, 581 Pa. 524, 866 A.2d 313 (Pennsylvania Supreme Court 2000). This was a civil case, but the same principle applies in criminal cases.

Technology has made juror misconduct a more common issue than it used to be. It used to be that, to conduct their own investigations, jurors had to go to the crime scene or to the library or otherwise take affirmative action in the real world outside the courtroom. Now, though, they can use the Internet to look up information.

In a Massachusetts case, for example, the defendants were charged with trafficking in cocaine. Commonwealth v. Rodriguez, 63 Mass.App.Ct. 660, 828 N.E.2d 556 (Massachusetts Court of Appeals 2005). After both sides had presented all their evidence and made their closing arguments, the judge instructed the jury on the law and they began deliberating. After a while the foreperson sent a note to the judge saying it looked like they were deadlocked; the judge told them to keep going, but after a while the foreperson sent out another note. The judge again told them to keep going.

After a while, the foreperson sent the judge a note that said, in part, “Your Honor, we have come up against a wall under Chapter 234, Section 26(b)” of the Massachusetts statutes. Commonwealth v. Rodriguez, supra. That surprised the judge because she hadn't instructed the jurors on that statute; she therefore needed to find out what was going on. The judge told the prosecution and defense lawyers what had happened, and held a hearing on the issue. The judge asked the foreperson “how she had obtained that citation” to the statute. The foreperson said Juror 14 was responsible. The judge brought Juror 14 to the courtroom, asked what had happened and
the juror explained that he had found the citation to the statute when he went “looking on the [I]nternet to see what laws were about jurors and jury duty and how it deals with a hung jury possibly.” The juror assured the judge he had found only that one statute. He further explained that he had brought it in with him that morning and the jurors “brought it up” while they were in the jury room. The judge informed the juror that she did not want to know about deliberations and allowed the juror to return to the jury room, but told him not to discuss their conversation.
Commonwealth v. Rodriguez, supra. The defendants moved for a mistrial based on what Juror 14 had done, but the court denied the motion. The defense’s concern was probably that the statute Juror 14 found might have encouraged the jury to convict when it would otherwise not have done so.

The defendants raised the issue on appeal, claiming the trial court should have ordered a mistrial and given them a new chance for an impartial jury. The Massachusetts Court of Appeals agreed with them. It noted that the statute in question
addresses impaneling, sequestering, and discharging jurors and is therefore procedural in nature. As such, it bears no direct relationship to the evidence admitted at trial, to any of the live substantive issues, or to the elements of the offense with which the defendants were charged.

That being said, it was researched by one juror concerned about the deadlock, and its introduction into the deliberations signaled that the jury may have been trying to remove a dissenting juror. The jury's uninstructed consideration of the statute reinforces our conclusion that the verdicts cannot stand.
Commonwealth v. Rodriguez, supra. The Court of Appeals therefore reversed the convictions and set aside the verdicts.

On a less serious note, I’ve found a few cases involving claims of juror misconduct based on jurors’ texting. In People v. Fulgham, 2008 WL 4147562 (California Court of Appeals 2008), a defendant who was convicted of possessing a controlled substance appealed, arguing in part that his conviction should be reversed because one of the jurors “committed misconduct by text messaging through the trial . . . instead of listening to the testimony.”

Two days into Fulgham’s trial, a juror told the court’s bailiff that Juror 10 was text-messaging during the trial. The bailiff told the court and the next morning the judge brought Juror 10 into court and questioned her outside the presence of the other jurors:

THE COURT: Ma’am, . . . it's been brought to the Court's attention . . . that during the course of this trial . . . you had been texting messages on your cell phone?

JUROR [10]: Oh, yeah, I did text message one time. Are we not allowed to do that?

THE COURT: Ma’am, you are required to pay attention to everything that goes on in here without any distraction of any kind.

JUROR [10]: Okay.

THE COURT: Further, I hope that you have not been texting any type of message in terms of what's been going on with this trial.

JUROR [10]: Oh, no, I haven't. I know that we're not supposed to talk about anything with anyone, so.

THE COURT: All right.
The judge asked the prosecutor and defense attorney if they wanted to question Juror 10, but both declined. On appeal, the Court of Appeals held that the defense attorney’s failure to question the juror “waived any claim of error in the trial court's investigation and decision to retain Juror No. 10.” People v. Fulgham, supra. The court also upheld the judge’s keeping Juror 10 on the jury because she told the “trial court she only sent a text message one time and that it was unrelated to the trial.” People v. Fulgham, supra.

These are simple juror-misconduct-involving-technology cases. Things were much more complicated in the very high profile case of U.S. v. Siegelman, 2007 WL 1821291 (U.S. District Court for the Middle District of Alabama 2007). After Siegelman was convicted, the defense filed a motion with the court asking it to conduct an investigation into the
authenticity of Exhibits 10, 11, 12, 13, and 15 and . . . 23, 24, and 26. As part of that investigation, Defendants would have this Court review data contained on the computer hard drives of computers used by the . . . . Siegelman requests this Court to order Juror 7 and Juror 40 to produce all hard drives, Blackberries, cell phones, or any other device capable of sending email or text messages that they used during the course of this trial. . . . [and] order Juror 7 and Juror 40 to disclose to the Court all internet service providers, email providers, and cell phone companies that provided email, text messaging or cell phone services to them during the trial.
U.S. v. Siegelman, supra. The federal judge denied the motion both because it said it had already conducted an extensive investigation into the jurors’ conduct in the case and because
[n]o court has ever held that a court's obligation to investigate extends that far. The Court does not believe the absence of such legal holdings has anything to do with the law having failed to develop as quickly as technology has evolved. The Court believes sound policy considerations are the reason that the law has not required such fishing expeditions. Such a holding would potentially destroy the jury system in this nation.
U.S. v. Siegelman, supra.

Monday, March 16, 2009

More Absurdity

Last fall, I did a post about a case in which a prosecutor refused to attach images of child pornography to an application for a search warrant because he said he was afraid he’d be prosecuted for “distributing child pornography” if he did. The U.S. Court of Appeals for the Seventh Circuit said this was a very strange thing for the prosecutor to do, since he would be attaching the images to an application being filed with the court as part of the official process of seeking a warrant to search for evidence of a crime. U.S. v. Griesbach, 549 F.3d 654 (7th Cir. 2008).

That prosecutor’s behavior may seem strange, but in a sense it’s the logical outcome of a view that has become increasingly popular and that, as I noted in the post I did last fall, resulted in Congress’ adopting the Adam Walsh Child Protection and Safety Act of 2006, Public Law No. 109-248 § 504, which went into effect on July 27, 2006. As I explained in another post I did last fall, the Act was codified as 18 U.S. Code § 3509(m).

Section 3509(m) says that in federal cases, the court is to deny “any request by the defendant to . . . copy . . . any . . . material that constitutes child pornography”, even when the defense wants the material to have it examined by its own expert witnesses. As I noted in that post, this is important because virtual child pornography – computer-generated child pornography – is not a crime. So if a defendant shows that what he or she possessed was virtual child pornography, created digitally and without the victimization of real children, then he/she should be acquitted on all charges.

This brings me to a recent case from Tennessee: State v. Allen, 2009 WL 348555 (Tennessee Court of Criminal Appeals 2009). The defendant in the case – Relicka Allen – was charged with possessing child pornography that a computer technician discovered on Allen’s computer after he took it in to be repaired. State v. Allen, supra. The technician told the computer store manager what he’d found, the manager called the police and Mr. Allen was charged with possessing child pornography. State v. Allen, supra.

Allen filed a motion asking for a copy of the hard drive so that his expert could examine it. The prosecution refused, but offered to let the expert examine the hard drive at the Sheriff's Department. Allen then asked the court to compel the prosecution to give him a copy of the hard drive. The court held a hearing on the issue, and Allen’s expert explained why he needed a copy of the hard drive:
Herbert Mack . . . described . . . the . . . programs and viruses by which material can be both deliberately and inadvertently downloaded into a computer and estimated it would take him approximately one week of intensive twelve- to fourteen-hour days to complete an examination of [the] hard drive. He testified he would probably require the assistance of support personnel from his office and, in addition, would need to consult regularly with counsel with respect to whether any sexually explicit files he found on the computer qualified as child pornography. He said that, given the large number of images allegedly contained on the computer, he would not be able to remember the specifics of the information without taking the computer hard drive from the sheriff's department.

Mack expressed concern about working from a “mirror image” rather than the hard drive itself, testifying that the programs in existence did not create true mirror images:

A. . . . . If what you're going to give me is a mirror image, my concern there is that I'm not getting all of the data that's there.

Q. And why is that? If it's a mirror image wouldn't you just get everything that's in the mirror?

A. No, sir.

Q. Why not?

A. A mirror image is a misnomer, okay. The computer programs that you have right now, okay, are for the purpose of recovering good data. Okay. So if a file has been ordered damaged or erased it's not going to be on the image. . . .

Mack testified that the risk of transmitting inaccurate information was high if defense counsel was dependent upon Mack to tell [him] what he had seen on a . . . disk image. Mack stated that there was an increased risk of disclosing non-discoverable information because the State's expert would be able to determine what tools had been run on Defendant's computer hard drive and what information had been recovered before Defendant was obligated to disclose its expert report. Mack also stated that Defendant would have no choice but to involuntarily disclose information that was not subject to discovery and that Defendant did not intend to use at trial.
State v. Allen, supra.
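
Terminology aside, Mack’s underlying point tracks a real distinction in computer forensics: a logical, file-by-file copy carries over only live files, while a bit-for-bit (bit-stream) image captures every sector of the drive, including deleted files and unallocated space. Here is a minimal sketch of the bit-stream idea in Python, assuming a hypothetical raw device path that requires administrative privileges to read; it is a conceptual illustration, not a substitute for a validated forensic tool.

```python
import hashlib

def bitstream_image(device_path, image_path, chunk_size=1 << 20):
    """Copy a drive sector-for-sector into an image file and return a
    SHA-256 digest for later verification of the image's integrity.
    Unlike a file-by-file copy, reading the raw device captures deleted
    data and slack space, because it ignores the file system entirely."""
    digest = hashlib.sha256()
    with open(device_path, "rb") as src, open(image_path, "wb") as dst:
        while True:
            block = src.read(chunk_size)
            if not block:
                break
            dst.write(block)
            digest.update(block)
    return digest.hexdigest()

# Hypothetical usage: bitstream_image("/dev/sdb", "evidence.img")
```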

After the hearing, the trial court issued a protective order requiring the prosecution to give Allen’s expert a copy of the hard drive. The prosecution refused. Allen filed a motion to suppress the evidence, in effect as a sanction for the prosecution’s refusal to comply with the court’s order. State v. Allen, supra. Instead of granting the motion to suppress, the trial court issued a second order requiring the prosecution to give Allen’s expert a copy of the hard drive. State v. Allen, supra.

The prosecution appealed that order to the Tennessee Court of Criminal Appeals, which upheld what the trial court had done: “We find these orders reasonable and appropriate, especially given [Defendant's] computer expert's testimony with respect to the extensive and exhaustive work entailed in his examination of [Defendant's] computer hard drive. Accordingly, we conclude that the trial courts did not err in granting Defendant['s] motion to compel the production of the evidence.” State v. Butler and Allen, 2005 WL 735080 (Tennessee Court of Criminal Appeals 2005).

The prosecution still refused to comply, so Allen filed another motion to suppress the evidence. The state filed a motion saying it could not comply without violating § 3509(m). State v. Allen, supra. The trial court denied the state’s motion because it found that nothing in Tennessee law prevented it from ordering that Allen’s expert be given a copy of the hard drive. The prosecution filed another motion claiming § 3509(m) prevented it from complying with the order; Allen’s attorney filed a brief pointing out that other courts had found that § 3509(m) doesn’t bind state courts. (It’s a federal statute, after all.)

The state filed a motion asking the court to reconsider and at the hearing on that motion the prosecutor told the trial court he had contacted the local U.S. Attorney’s office and
“disclosed that a copy of the mirror image of the hard drive would be provided to defense counsel and their experts. The State informed the court that defense counsel, any defense expert, as well as court staff and others could be at risk of federal prosecution for possession of child pornography in violation of the Adam Walsh Act if the discovery material was turned over to Defendant.”
State v. Allen, supra. The prosecutor was, in effect, using § 3509(m) to pressure the judge into denying the request for a copy of the hard drive. After an exchange in which the prosecutor made that pretty much clear, the court ordered that the hard drive be suppressed because defense counsel and the defense expert were "totally chilled from being able to evaluate their own-evaluate the evidence against them." State v. Allen, supra. The prosecutor then said the judge was effectively dismissing the charges and asked him to do so formally, so the state could appeal the decision. The judge did.

On appeal, the Court of Criminal Appeals held that § 3509(m) “does not apply to proceedings in Tennessee state courts.” State v. Allen, supra. It also noted that it had
been unable to find a single state or federal criminal prosecution of defense counsel anywhere in the country based on counsel's possession of child pornography as part of a state's discovery procedures. We think the likelihood of federal prosecution of defense counsel in this case for possession of child pornography is remote at best and did not justify the suppression of evidence and dismissal of the prosecution of Defendant.
State v. Allen, supra. The Court of Criminal Appeals noted that while it understood the trial court's frustration with the prosecutor's "persistent refusal . . . to comply with court orders", the court should have used its power to hold the prosecutor in contempt to deal with the problem. State v. Allen, supra.

Needless to say, I think things are really getting out of hand when it comes to dealing with child pornography evidence.

Friday, March 13, 2009

Prescriptive Rules

I’ve done a couple of posts on the “insider” issue: the problem of defining when someone who is authorized to access a computer system exceeds the permissible bounds of that access and therefore becomes subject to criminal liability.

As I explained in a post I did earlier this year, the problem arises because the crime these “insiders” are prosecuted for is called “exceeding authorized access.”


The problem, as I explained in that and other posts, comes in establishing how the "insider" knew he or she was exceeding the scope of his or her authorized access. It's a basic premise of criminal law that you can't be prosecuted for a crime unless you intended to commit the crime (I purposely exceed my authorized access to my employer's computer system) or at least knew you were committing the crime (I know I'm exceeding my authorized access to my employer's computer system but I'm going to do it anyway). In other words, you must have been put on notice as to what is permitted and what is not when it comes to using that computer system.

The problem criminal law has had with this crime is one of line-drawing. You have a trusted employee who's authorized to use the computer system for certain purposes, like an IRS customer service representative who's authorized to use the system to look up information (tax return filings, refunds, etc.) in order to answer questions from the taxpayers who contact the office. Assume the IRS agent uses the system to look up friends, his fiancée's father and a number of other people; that use is, as a matter of common sense, completely out of bounds. The IRS agent is, in effect, off on a virtual frolic and detour. "Frolic and detour" is a term the law uses to refer to the situation in which an employee briefly abandons carrying out his employer's business to run an errand or do something else personal; a delivery driver who makes a detour to visit his girlfriend would be an example.

So in my hypothetical, we all know as a matter of common sense that the IRS agent went on a virtual frolic and detour and, in so doing, exceeded the bounds of his authorized access to the IRS system. But common sense won’t work for the law; the law has to be able to draw a reasonably clear line. So the law has to be able to define what “exceeded authorized access” means with enough precision to put people on notice as to what they can, and cannot, do.

The problem, as I’ve noted before, is that it can be really difficult to do that in practice. I did a post earlier this year about a corporate Vice President who used his employer’s computer system to collect information the VP could use when he went out on his own. As I noted in my post, the court held that the VP did not exceed authorized access to the system because he was allowed to use it to look up the information at issue.

Some, as I may have noted, think the solution to the problem of defining the crime of exceeding authorized access lies in code; they say employers should simply use code to lock people into permissible use zones. If you’re somehow able to get around the limits on your permissible use zones, the efforts you made to do so would inferentially establish your intent, i.e., you knew you were exceeding authorized access and intended to do just that. (And if you weren’t able to get around the limits, there’d be no exceeding authorized access, which I think is the real point.)
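
To make the code theory concrete, here is a minimal sketch, in Python, of what locking an employee into a permissible use zone might look like, using my IRS hypothetical. Everything in it (the representative names, the account IDs, the fetch_from_database helper) is made up for illustration; the point is simply that the system itself refuses out-of-zone lookups, so any access that occurs anyway required circumventing the check:

```python
class AccessDenied(Exception):
    """Raised when a lookup falls outside the user's permissible use zone."""

# Hypothetical assignment table: the taxpayer accounts each customer
# service representative is currently authorized to view.
ASSIGNED_ACCOUNTS = {
    "rep_smith": {"acct-1001", "acct-1002"},
    "rep_jones": {"acct-2001"},
}

def fetch_from_database(account_id):
    # Stub standing in for the real records system.
    return {"account": account_id, "status": "on file"}

def lookup_record(user, account_id):
    """Return a record only if it lies within the user's assigned accounts.

    This is code-based enforcement: the employee never has to interpret
    a written policy, because the system simply refuses out-of-zone
    requests. Any access that happens anyway required circumventing
    this check, which is itself evidence of intent.
    """
    if account_id not in ASSIGNED_ACCOUNTS.get(user, set()):
        raise AccessDenied(f"{user} is not assigned to {account_id}")
    return fetch_from_database(account_id)

# lookup_record("rep_smith", "acct-1001")  # permitted: returns the record
# lookup_record("rep_smith", "acct-2001")  # out of zone: raises AccessDenied
```

On this approach the evidence of intent is built into the architecture: you cannot innocently stumble across a record the system refuses to give you.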

The other theory is the contract theory, which I’ve written about before. It’s the one I was referring to above, when I talked about employer policies that tell you what you can and cannot do. The problem with that – as I wrote in a post earlier this year – is that it can be very difficult to come up with workable policies that do this.

I did a post earlier this year suggesting an alternative approach: making it a crime to misuse authorized access instead of exceeding authorized access. I still like that idea but I have a student who’s doing an independent study on this general issue, and as we were discussing the problem last week, I came up with another approach. I’m going to outline that approach and I’d be interested in any comments you might have on it.

We were talking about the central problem in defining the crime of exceeding authorized access: drawing a clear line between what is and is not permissible. My problem with the line-drawing approach is that it essentially relies on prescriptive rules.

Law uses two kinds of rules: prescriptive rules (do this) and proscriptive rules (don't do that). Prescriptive rules tend to be civil in nature; we have lots of regulatory rules that are prescriptive. Proscriptive rules tend to be criminal in nature; a statute that defines a crime is structured to prohibit certain behavior and/or certain results, such as causing the death of a human being. The distinction between the two types of rules isn't perfect; categories sometimes blur in law, for various reasons (law is concerned with practical matters, and legislators are not always masters of statutory drafting). But the distinction exists, and it's a good conceptual model for thinking about the exceeding authorized access problem.

I see the contract-line-drawing approach to the problem as relying on prescriptive rules. In this approach, it’s basically up to the employer to develop rules that prescribe what the employee can do and stay within the scope of his or her authorized access to the employer’s computer system. This approach, in other words, puts the risk of error on the employer; if the employer doesn’t get the policy exactly right, it leaves some play, some room, for employees to exploit the computer system in greater or lesser ways for their own purposes. I’m not, of course, saying it’s impossible to develop policies that can define the scope of authorized access with some precision; I’m simply saying I think it can be very difficult, especially with regard to certain types of employment.

That brings me to the alternative approach I came up with when my student and I were discussing this last week. The alternative approach is to put the risk on the employee, not the employer. How could we do that and how would it help solve the problem?

The way we could do that is to make exceeding authorized access a crime – just as we currently do – but alter the way we define the crime. As I noted above and in other posts, the problem we're having with defining the crime of exceeding authorized access is the issue of intent. If we can't draw precise lines between what is and what is not forbidden, then the law does not clearly forbid at least certain types of conduct, which means the person who engages in that conduct cannot be prosecuted because we can't show that he or she intended to exceed, or knew he or she was exceeding, authorized access.

We could address that by making exceeding authorized access a strict liability crime. As Wikipedia explains, strict liability crimes do not require the prosecution to prove intent; all the prosecution has to prove is that the person engaged in the prohibited conduct (i.e., exceeded authorized access). Strict liability crimes put the risk on the person because if they do what’s forbidden, they have no excuse; they can’t say, “I didn’t mean to” or “I didn’t know.” That may seem harsh, and it can be. To mitigate the harshness of holding people criminally liable without requiring intent, the law uses a compromise: We can eliminate intent in a criminal statute but, in exchange, the penalties have to be small, usually just a fine.

Strict liability crimes evolved about a hundred years ago as a way to enforce rules that were being adopted to encourage businesses and others to follow certain standards. There’s a case, for example, in which the CEO of a grocery company was convicted of a strict liability crime after his company let food stored in a warehouse be contaminated by insects and other vermin. The CEO appealed his conviction to the U.S. Supreme Court, arguing that he shouldn’t be held liable because he didn’t know what was happening in the warehouse. (It was a big company with lots of warehouses.)

The Supreme Court upheld the conviction because the crime he was convicted of was what's called a regulatory offense. Regulatory offenses don't have individual victims; they're intended to encourage people to abide by the law in ways that contribute to the greater social good (like ensuring that food isn't contaminated). As the Supreme Court noted, the best way to go about doing that is to use strict liability; strict liability puts the risk of error on the person who's responsible for seeing that a rule – a rule the purpose of which is to promote the greater social good – is enforced. If the rule is not enforced, then the person who's responsible has no excuse; good intentions, or the lack of them, are irrelevant. All that matters is the result.

So as my student and I were talking about all this, I came up with the idea of creating an exceeding authorized access crime that's a strict liability crime. How would we do that? Well, I'm not exactly sure. If we decided this was a good way to go, we'd have to figure out how to structure the crime. I suspect – though I'm not sure (and could be wrong) – that we could come up with a good general definition of what it means to exceed one's authorized access to a system. We might phrase it in terms of using the system only in a fashion appropriate for carrying out your assigned tasks, say, or something similar.

Or maybe it's a stupid idea. Maybe it wouldn't do anything to help achieve clarity in this area. I still like my misusing authorized access alternative. What I find intriguing about this notion is the idea of putting the risk on the person who is in a position to exceed authorized access. I really don't think the prescriptive-rules approach (putting the risk on the employer) is a particularly viable option . . . but, again, I could be way off base.

Wednesday, March 11, 2009

Possession of Identity Theft Tools

Colorado has an unusual statute that makes it a crime to possess identity theft tools. I can’t find a statute like it in any other state.

This is what the statute says:

A person commits possession of identity theft tools if he or she possesses any tools, equipment, computer, computer network, scanner, printer, or other article adapted, designed, or commonly used for committing or facilitating the commission of the offense of identity theft . . . and intends to use the thing possessed, or knows that a person intends to use the thing possessed, in the commission of the offense of identity theft.
Colorado Revised Statutes § 18-5-905(1).

Possession of identity theft tools is a felony. Colorado Revised Statutes § 18-5-905(2).

Why did Colorado adopt this provision in 2006? I can see the rationale for doing so, I think, but I also doubt the statute is constitutional.

Let’s start with why they adopted it. As I’ve noted before, every state makes it a crime to possess burglar’s tools, i.e., tools that are specially adapted for or commonly used to commit burglary. As I explained in an earlier post, the purpose of this crime is to let law enforcement officers step in and arrest someone they suspect of getting ready to commit burglary before they can actually break into a house or a building.

As I noted in that post, possession of burglar’s tools statutes define an attempt crime. If a police officer on patrol sees someone standing outside a jewelry store equipped with tools that could be used to break into the store, the officer can stop and check things out. If the evidence indicates that yes, the person was getting ready to break into the store then the officer can arrest him for attempted burglary.

The same premise applies if an officer finds someone getting ready to kill someone or kidnap someone or set a building on fire. As long as the evidence and the legitimate inferences from the evidence prove beyond a reasonable doubt that the person had embarked on a course of conduct that was intended to culminate in the commission of a crime (burglary, murder, etc.), they can be charged with and convicted of attempting to commit that crime.

The policy justification for criminalizing attempts is that it lets officers intervene to stop crimes, instead of having to wait until the person breaks into the store or commits murder. Criminalizing attempts – which are by definition incomplete crimes – is also justified on the grounds that the person's conduct shows they are dangerous, i.e., willing and eager to commit a crime.

So where does that leave us with the Colorado statute? As I said, I think it's a specialized burglar's tools statute. As such, it's presumably based on the premise I noted above: by making the possession of identity theft tools a crime, the statute lets law enforcement officers arrest someone who has such tools and thereby stop them before they actually commit identity theft.

That seems reasonable, and I don't have any problem with the rationale of the statute. The problem I have with it goes, as I noted earlier, to its constitutionality.

The U.S. Supreme Court has held that statutes are void for vagueness and therefore unconstitutional when the language of the statute is so unclear that people “of common intelligence must necessarily guess at [their] meaning and differ as to [their] application.” Connally v. General Construction Co., 269 U.S. 385 (1926). Vagueness is particularly objectionable when it comes to criminal statutes, for several reasons.

One reason why vagueness in criminal statutes is particularly objectionable is that if you are convicted of violating a criminal statute, you'll probably be punished with some very severe sanctions (fine, imprisonment, damage to reputation). It's not fair to impose harsh sanctions on people if they couldn't understand that something was prohibited.

Another, related reason derives from a basic principle of criminal law: ignorance of the law is no excuse. If I’m prosecuted for murder or theft, I can’t defend myself by saying “I didn’t know it was a crime to (kill people/steal stuff).” The principle that ignorance of the law is no excuse implicitly assumes that (i) the law exists and (ii) is knowable. In other words, it’s not enough just to adopt a statute that makes something a crime; the statute has to make it clear what is, and is not, being criminalized. It’s not fair to hold me liable for violating a statute that was so ambiguous or confusing that I couldn’t figure out what was being criminalized.

The third reason why vagueness is especially problematic when it comes to criminal statutes is that a vague criminal law can give rise to arbitrary and discriminatory enforcement. That is, such a law gives the people who are responsible for enforcing criminal law a lot of latitude to decide who they want to go after and who they don't. So unprincipled law enforcement officers and prosecutors can use such a statute against people they don't like or want to harass, and let everyone else go.

It looks to me like the Colorado possession of identity theft tools statute may well be void for vagueness. Vagueness is an issue that has been raised with regard to burglar's tools statutes, but those statutes have been around long enough – and the kinds of tools they address are unambiguous enough – that vagueness really isn't a viable argument in that context.

I don’t think that’s true here. I think the statute’s making it a crime to possess “any . . . computer, computer network, scanner, printer, or other article adapted . . . or commonly used for committing or facilitating the commission” of identity theft is unconstitutionally vague. How am I supposed to know whether the computer, scanner and printer I use are “commonly used for committing or facilitating” identity theft? What, in other words, makes my possession of these items a crime? How can I tell when my possession of a computer, scanner and printer is legal and when it’s a crime under the Colorado statute?

Maybe I’m missing something. Maybe there’s a class of computers, scanners and printers that are specifically adapted for identity theft and essentially have no other use. If that’s true, then maybe this statute is not unconstitutionally vague. I doubt it, though.


Monday, March 09, 2009

Can You Trust Your Car? - Part 2

A couple of years ago I did a post about a federal case in which the FBI used a car’s integrated telecommunications system to listen in on what people in the vehicle said without their knowing about it.

As I explained in that post, the opinion in that case had nothing to do with the people whose conversations the FBI eavesdropped on, courtesy of the car’s cellular phone system. Instead, it was a civil case: The manufacturer of the car involved was trying really hard not to have to cooperate with the FBI, for what I think are obvious reasons.

Think about it: If you knew the cellular phone system installed in your car as part of a system like OnStar could be used to listen in on your conversations, would you be keen on having a car with such a system? Even if you weren’t planning on using your car to plot criminal activity, you might still find the notion of having someone eavesdrop on you to be unsettling. After all, aren’t cars supposed to be private places?

I found another car eavesdropping case, one that involves the OnStar system (the federal case involved a different system) and deals with a motion to suppress brought by the target of the eavesdropping. The case is State v. Wilson, 2008 WL 2572696 (Court of Appeals of Ohio 2008) and here's a summary of the facts:
In November or December of 2006, appellant, Gareth Wilson, purchased a used Chevrolet Tahoe equipped with the OnStar system. [He] declined OnStar services. On January 2, 2007, OnStar received an emergency button key press from the Tahoe, as the service had yet to be disabled. The OnStar employee did not receive a response, so the employee contacted the Fairfield County Sheriff's Office and requested emergency assistance be sent to the vehicle's location.

While monitoring the vehicle, the OnStar employee overheard the occupants of the vehicle discussing a possible illegal drug transaction. The employee permitted the Sheriff's dispatcher to listen to the conversation. The dispatcher contacted Deputy Shaun Meloy regarding the OnStar call. Deputy Meloy in turn notified Reynoldsburg Police Officer Joe Vincent who notified Officer James Triplett.

Officer Triplett effectuated a traffic stop of the Tahoe. As Officer Triplett approached the vehicle, he observed furtive movement from [Wilson], the driver. . . . Officer Triplett removed [Wilson] from the vehicle and conducted a search, whereupon marijuana was discovered.
State v. Wilson, supra.

Wilson was charged with trafficking in marijuana, a fourth degree felony under Ohio law. He filed two motions to suppress the marijuana arguing that it was “discovered as a result of a traffic stop predicated on a violation of Ohio's wiretapping and electronic surveillance law, thereby violating his rights against unreasonable search and seizures as protected by the Fourth Amendment to the United States Constitution.” State v. Wilson, supra.

As I explained in an earlier post, the U.S. Supreme Court held – in Katz v. U.S., 389 U.S. 347 (1967) – that we have a 4th Amendment expectation of privacy in the contents of our telephone calls. In other decisions, the Court has held that we have a 4th Amendment expectation of privacy in conversations we hold in private places – like our homes. The government's surreptitiously listening in on phone calls is known as wiretapping; its surreptitiously eavesdropping on face-to-face conversations is known as bugging. Both the states and the federal system have statutes that make wiretapping and bugging illegal; the statutes implement the 4th Amendment's requirements in this respect. (They also, in certain respects, go beyond what the 4th Amendment requires, because legislators have on occasion wanted to ensure that we have even more protection in this area.)

Wilson’s motions to suppress therefore raised three issues: Did OnStar eavesdropping violate his rights under the 4th Amendment? If not, did it violate his rights under Ohio’s wiretapping and bugging statute? Finally, was the traffic stop valid?

The Ohio Court of Appeals quickly disposed of the Fourth Amendment issue:
The Fourth Amendment is a restriction against governmental action only. The seizure by a private person is not prohibited by the Fourth Amendment. . . .[T]here is no evidence that any law enforcement officers aided the On Star representative in the monitoring of the conversation. Law enforcement's role was strictly passive in terms of listening to, but not providing the means or controlling the manner of the monitoring. Thus, the Court finds no governmental action in this case and therefore no Fourth Amendment violation.
State v. Wilson, supra.

The court then turned to the Ohio statute. Ohio Revised Code § 2933.52(A) makes it a fourth degree felony to "[i]ntercept . . . or procure another person to intercept . . . a wire, oral or electronic communication." Since the OnStar system was used to listen in on what people were saying in the car, it didn't constitute intercepting a wire or electronic communication (wiretapping a phone call or email); instead, it consisted of intercepting an oral communication (bugging a conversation). Wilson argued that what happened to him violated this provision.

The prosecution said what happened to Wilson didn’t violate § 2933.52(A) because it came within an exception created by the next section of the statute. Section 2933.52(B) states that it isn’t a violation of § 2933.52(A) for an “employee . . . of a provider of wire or electronic communication service . . . to intercept, disclose, or use that communication in the normal course of employment while engaged in an activity that is necessary to the rendition of service”. To support its argument, the prosecution offered a transcript of the conversation between the OnStar employee and the Sheriff’s Office dispatcher:
ON STAR OPERATOR: Hi. This is Edwina calling from OnStar Emergency Services. We just had an emergency key press from a vehicle. We're not getting any voice contact at all. They're located on Churchview Drive in Pickering (sic), Ohio. The closest cross street is Finch. The vehicle is at the top of the T at Finch and Churchview Drive. . . .

ON STAR OPERATOR: Great. The vehicle has -- they pressed the button. I cannot get anybody to respond to me whatsoever, so I don't know if it's empty or if somebody is just not able to respond. . . .

ON STAR OPERATOR: Thank you very much for holding. I do have a dispatcher back on line. I will be in the background.

(Inaudible conversation )

ON STAR OPERATOR: Quite an ear full, huh, Dispatch?

SHERIFF'S DISPATCHER: Right. We're monitoring Reynoldsburg Police right now.
State v. Wilson, supra. After the Sheriff's Dispatcher mentioned the police, the OnStar employee "communicated the following to the vehicle: `ON STAR OPERATOR: This is Edwina with OnStar Emergency Services. Police have dispatched to your location at Spring Run and Reynoldsburg. I will be disconnecting. Please know we are here whenever you need us.'"

The prosecutor said the OnStar person listened in on what people were saying in the car and shared it with police as part of performing an activity necessary to OnStar service, i.e., ensuring the occupants of the car were safe. Wilson argued that the exception the prosecution was trying to invoke shouldn’t apply because he didn’t have a contract with OnStar. The court disagreed: “It is uncontested that someone other than the OnStar employee initiated the contact as the `panic button’ had been activated. Clearly the occupants of the vehicle initiated the contact and failed to respond to the OnStar employee.” State v. Wilson, supra. So Wilson lost on the second issue. . .

and on the third issue: the court found that the OnStar information gave the officer probable cause to stop the car to be sure everything was okay. It also found that the officer's observing a "questionable license tag" on the car (I don't know what that's about) further supported his reasons for stopping it. So Wilson lost on his motions to suppress, and his no contest plea on a lesser charge (which got him 60 days in jail and 5 years of community control) stands.

I’m still waiting for a real 4th Amendment challenge to the use of OnStar . . . a case like the federal case I wrote about before, in which law enforcement officers get the service to let them listen in on conversations in the vehicle. As I explained in that earlier post, it seems to me that should be a 4th Amendment violation . . . though I can also see the argument that you assumed the risk of being eavesdropped on by having the system in your car.