Categories
Technology

Zuck’s “data” dodge: it’s important.

Watching some of the highlights of Mark Zuckerberg’s testimony before Congress, I see lots of Senators asking him yes or no questions such as, “Do you believe Facebook users have a right to download or delete their data?” Zuckerberg’s response was an unequivocal “yes, Senator” in all cases. But when asked questions about allowing users to decide how data accrued on them could be used or corrected, Zuck began to backpedal and attempt to slip back into tech speak.

It’s really important to understand why he pulls up short when asked about correcting erroneous data or controlling how data is used. One reason is that all the questions asked to that point were about the “user’s data,” which Zuck can very quickly and easily answer in ways that make the Senators happy.

Because those answers were already beaten out of Facebook a decade ago. Then, the question was about copyright: Facebook originally claimed copyright ownership over your photos and posts, a notion which was received with howls of condemnation at the time. The result was a change in Facebook policy, which carved out for itself a limited license for that kind of data.

All of which is to say no: Facebook does not own your “data,” nor does it hold unlimited copyright to it. Yes, you already have a legal right to all of that information, including your posts, comments, likes, photos, uploads and the whole kit and caboodle.

But what companies like Cambridge Analytica (and Coca-Cola, and Pepsi, and Sony) are really after is the metadata created by the patterns in your data. The fact that you “like” Roseanne is a lot less important than the fact that you watch more Facebook videos at 3pm than at other times of day. You are available to be advertised to and influenced at those times.

Holding on to actual data about any one individual is a waste of server space, even if you think you might want an archive for some reason. What matters is the ability to observe behavior in real time. That’s why “meme” images with sloganesque sayings on them are so important: you can send one out that’s intended to seem racist and watch what happens.

How long does the average person look at that image? The average Republican? The average 4-year degree holder? The average cop? Does the length of time they look at an image correlate to likes and comments? Does it even need to?
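To make that concrete, here is a hypothetical sketch of the kind of cohort aggregation described above. Every field name and number is invented for illustration; no real platform data is involved:

```python
# Hypothetical sketch: given per-impression dwell times tagged with
# demographic buckets, compute average attention per cohort.
from collections import defaultdict

impressions = [
    {"cohort": "republican",    "dwell_ms": 4200, "liked": True},
    {"cohort": "republican",    "dwell_ms": 1800, "liked": False},
    {"cohort": "degree_holder", "dwell_ms": 900,  "liked": False},
    {"cohort": "degree_holder", "dwell_ms": 1500, "liked": True},
]

totals = defaultdict(lambda: [0, 0])          # cohort -> [sum_ms, count]
for imp in impressions:
    totals[imp["cohort"]][0] += imp["dwell_ms"]
    totals[imp["cohort"]][1] += 1

for cohort, (sum_ms, n) in totals.items():
    print(cohort, sum_ms / n, "ms average dwell")
```

None of the individual posts matter to a query like this; only the pattern does.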

None of this data is “yours.” It wouldn’t exist in digital form without Facebook providing a platform and third-party businesses aggregating it into actionable insights. Which is why “correcting” data about you is so important and so difficult for Zuck to agree to: that would require that companies open up their data operations to let you see their assumptions about you.

Doing so would most likely be an infuriating experience for the end user and a nightmare for businesses. Which isn’t to say that they shouldn’t allow us to see what their assumptions are. But that’s what I think the line he’s going to try to skirt will be.

Categories
Cyber-Security Politics Technology

A Fundamental Question: what did the Russian hack look like?

Senator Al Franken was just on This Week with Martha Raddatz, talking about the Russian hack into our elections and the Trump Campaign’s strange relationship with said hack. In discussing the hack itself, Franken pointed out that one means of hacking the system was to “mess with Google’s algorithms” to make reports from Russian state-controlled RT or Sputnik show up higher in the rankings.

What he’s referring to, if I understand him correctly, isn’t “messing with” Google’s search ranking algorithms directly. Instead, he’s referring to what most of us call “black hat search engine optimization”: the intentional manipulation of the way search engines work to get an illegitimate source to the top of the search results.

Regardless, it seems like our discussion of Russian hacking, collusion with US interests and the rest is greatly confused by not knowing what the “hack” actually was. Right now, we have an idea of Russian hacking, allegations of Trump Camp collusion, discoveries of conversations between principals in this story, potential perjury of our nation’s Attorney General… anything but hard fact on which to base a reasonable decision.

With any other type of crime, there’s a dead body, a missing item, a victim. There is physical proof that something happened, if not what happened or who did it. And for better or worse, our sense of the importance and severity of the crime reflects the physical proof of the act. Here, we have nothing.

It’s hard to imagine the American public continuing to be interested in this story long-term without much firmer proof of what went wrong in the first place. What exactly did the Russian hack of our elections look like?

Categories
Crime Technology

Body Cameras in Rochester. 5 questions yet to be answered.

Now that the City of Rochester has selected a vendor for the body-worn cameras (BWCs) City police officers will be wearing, it should be fully ensconced in the process of writing up policy documents for their full-time use. This is according to the City of Rochester’s timeline of events here. The City offers many “model policies” as envisioned by the ACLU and the International Association of Chiefs of Police, as well as actual policy docs from similarly-sized cities in the US.

But while we wait for the formal policy document to be announced, I’m left with 5 big questions about what I’m seeing.

#5. Who are the RPD’s policy-writing partners?

The timeline notes that throughout the months of January, February and March, “RPD is working with its partners, to include the Rochester Police Locust Club” to develop a policy document for BWC use in the field. The RPD and the Locust Club – cops and more cops – we know about. Who else? How will the concerns of the public, ably voiced in community input sessions, be represented in policy meetings?

#4. What makes video “evidentiary?”

Standard in the model policies is the clause that video should be kept only if the information contained in it is considered “evidentiary” to an ongoing investigation or trial. This makes sense. But the question is how “evidentiary” is defined, and in relation to what?

Say a BWC is used in a traffic stop on East Ave while there is a simultaneous investigation of underage drinking on the same street. Does evidence get stockpiled in the name of the second investigation, at the expense of the people involved in the first?

#3. Under what circumstances can “non-evidentiary” video be reexamined?

The public comments make clear that local residents are OK with keeping video on file that is currently being used in an investigation or trial. They even seem to be OK with allowing non-evidentiary video to be kept on hand. The ACLU recommends allowing non-evidentiary video files to be kept up to 6 months. But why allow the video to remain at all, if there’s no immediate reason to find the video evidentiary? The obvious answer would be to reinvestigate the video in the event that some other crime might be solved with it.

By whose authority is that video reopened? Is a warrant required to reopen the archived video? Some other benchmark? In fact,

#2. Does a subject of a BWC video get notified of the video’s status?

We can guess that the answer to this is ‘no.’ But that raises more questions. Are we all supposed to just believe that local police have disposed of video? Or can we be informed, in keeping with our right to privacy? If there is a reason to keep video of a resident beyond the retention policy, that certainly seems like something the resident should be made aware of, yet doing so just as obviously could endanger important police investigations.

#1. Can policy ever match reality?

We invest a lot of faith in our institutions: it’s a cornerstone of a functional democracy. The effectiveness of local police is no less critically based on faith and trust – even if that trust is tested on a moment-by-moment basis. But a casual read of even the most conservative model policy on body-worn cameras reveals a buffet of potential civil rights violations.

You don’t have to fear the “dirty cop,” the “rogue,” the “out of control sergeant” or any other made-for-TV cop bad guy to understand that the models seem like a problem. A liberal reading of even the ACLU’s model policy could lead to perpetual video records of Rochester, one side of the city to the other. Unless Rochester’s policy ends up being a lot more conservative than the models, how well or poorly body cameras are implemented is going to come down to trust.

And it’s trust that the body cameras are supposed to improve. A tall order.

Categories
Technology

RPD’s TASER use report foists old data on the media as news.

In the year 2015, it’s worth questioning documents that come to you electronically… scanned from their original print version. Today, the Rochester Police Department released documents outlining both RPD policy on and the effective use of TASER Electronic Control Devices (colloquially: shock-the-shit-out-of-you Tasers). The header of the doc (scanned for reading here) shows a publish date of 2015. The documents inside, however, seem to have been produced some time in 2012 and cite 2011 data as current.

[scribd id=268046924 key=key-ZK2whpGb5G0ehNpYqjsE mode=scroll]

In the Executive Summary section, p. 4, pains are taken to demonstrate how little the RPD uses TASERs. It notes that only 8% of all “use of force” situations used the TASER and that less than 0.4% of RPD arrests involved them. Furthermore, it notes, Rochester’s Police Department issues TASERs to only 18% of its total police force. This puts Rochester on the low end of NYS metropolises using TASER technology, according to p. 8.

Sidenote: what the fuck, Greece?

The problem is that this data is all 4 years old and the TASER program in Rochester is only 13 years old. If 0% of officers carried TASERs in 2003 and 18% carried them in 2011, are we to assume that 29% of officers carry them now? Because that seems to be the rate of growth, based on the data. That same page notes (see footnote) that the “current budget” in 2012 would have increased that number to 50%. Do we know if that happened?
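For what it’s worth, that 29% figure falls out of a simple straight-line extrapolation. A back-of-envelope sketch, reflecting my reading of the arithmetic rather than any RPD projection:

```python
# Assume TASER adoption grew linearly from 0% in 2003 to 18% in 2011,
# then extend the trend line forward (an assumption, not RPD's data).
start_year, data_year, pct_at_data = 2003, 2011, 18.0
rate = pct_at_data / (data_year - start_year)   # 2.25 points per year

for year in (2012, 2013, 2014, 2015, 2016):
    print(year, round(rate * (year - start_year), 2))
# Year 13 of the program (2016 on this timeline) lands at ~29.25%.
```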

Every other statistic bulleted in this report, or impressed upon the media when it was released, is called into question by this fact alone. Sure, small numbers are small. But only if they stay that way. What changes about these numbers when we change – to say nothing of double – the scale?

This report is supposed to quell concerns in the media about the effectiveness of TASER equipment, but it leaves a lot to be desired, even if we overlook the antiquity of the data.

What stands out the most is that the bullet point the RPD wants to stress – that the TASER has been 89% effective in “use of force” situations – seems like an impressive way of saying that the TASER has been ineffective the other 11% of the time. By what benchmark were the other 11% deemed ineffective? What was the next step in those situations?

Helpfully, the RPD also includes (p. 31) a “Use of Force Matrix,” which appears to be part of a training document showing the desired escalation of force by police officers in the field. As you might expect, the TASER appears near the very top of this matrix. Above that scale, officers are instructed that deadly force, “Impact Instruments” and the illustrative term “Groundcuffing” may be used.

But this just raises the question: if TASERs are only effective 89% of the time, are we given to understand that the ineffective cases graduate to lethal force? The report shows several cases of TASER use, but none of them was deemed ineffective, so we don’t know based on the report.

I don’t think I’m reading too much into the numbers, and certainly these are the sorts of questions that can be reasonably asked at a reasonable press conference and dispelled with reasonable answers. But how much of that will actually happen between now and when the story’s dead?

Categories
Social Media

Gates Police’s unhelpful Facebook video

Yesterday afternoon, Rachel Barnhart posted a video to her Facebook profile from the Gates Police Facebook profile, showing two dangerous recent interactions that Monroe County police have had with the public. One depicts a woman who tells 911 outright that she wants the cops to shoot her. The second shows what appears to be a distraught and listless man shuffling around, not obeying police officers’ repeated commands to keep his hands out of his pockets.

Both of these situations are unquestionably dangerous. They’re two great examples of exactly the kinds of situations for which we rely on police. And had the video simply said, “this is what we do, thank you for your support,” it probably would have been a fine video. I would have applauded the outreach.

Sadly, the video did not stop there. It continued with the following statement:

The mentality around our country right now of no respect and challenging authority is the root cause of many of these violent encounters with Police.

This statement is troubling on a number of levels, the most obvious being that the mentally ill do not need a “mentality around our country right now” to wish harm to themselves or others. If that was the message they intended to send to the public, they did themselves a disservice by not showing actual instances of disrespect to law enforcement. Those instances cannot be rare in any age. Instead, they undercut their message with video that does not come close to fitting the situation.

The real problem, however, is that there is a real and legitimate debate happening “around our country right now.” The debate is about police procedure; the debate is about race and policing; the debate is about the militarization of our nation’s civilian police forces, as seen in Ferguson among other places; it continues, as ever, to be about the use of tasers as suppression tools.

The Gates Police video seems to want to jump into the middle of all that and just start throwing round-house punches. The blanket statement that there is a culture of disrespect seems to group everybody who objects to police procedures into the same camp with the two mentally disturbed people in the video. Somehow, a legitimate and perennial socio-political debate about how a free people choose to police themselves becomes a nation of lunatics, clawing at the walls of their cages.

To be fair, there is probably no one at the Gates Police Department who is a skilled activist, marketer or even PR person. Nor do I suppose there should be: we rely on the police to give us unvarnished truth, and resent the polished bullet points of larger metropolitan police statements. I’m sure that the message was a heartfelt one, if badly communicated.

Still, it may be impossible for those of us not directly connected to law enforcement to see these words as anything less than statist: we do a dangerous job, we protect you, so you shouldn’t question our methods. Doing so is disrespecting our good graces. The message seeks to end debate with an oversimplified generalization. It leaves no room for discussion, no quarter for anyone who quibbles with the details and displays no shred of self-reflection or awareness.

“Challenging authority” is not the same as not respecting it, and indeed, open public debate is the best route to building respect and trust. That, and acknowledging that the police are civilians, too. That there is no separation between the police and the policed.

As citizens of that same free state to which the rest of us belong, police officers have as much right to voice their opinion as anyone else. But when that opinion comes not from a single law enforcement officer, but an entire department, the effect is monolithic and antagonistic. I would like to hope that this was not the intention of the Gates Police Department. But it serves as a pretty good example of how bad messaging could, in perhaps less harmonious communities, begin a race to the bottom of our civic nature.

Categories
SECURITY Technology

Is Lovely Warren committing a crime by sharing her password? The Ninth Circuit could decide soon.

When Mayor Lovely Warren’s office announced that her Facebook accounts had been “compromised,” they didn’t specify by whom. And we may never know, since they’re not really under any obligation to tell us. But one thing they made absolutely clear is that Lovely Warren’s Facebook accounts are in fact managed by an unspecified but large number of people who are sharing account credentials. That means that, if indeed the account was “compromised,” they themselves didn’t really have any idea who compromised it.

This is hardly an unfamiliar or uncommon practice in office settings. Among the many and varied jobs I’ve done on my way to becoming a freelance web developer, I’ve done a fair amount of deskside support. And one thing that is universal at every level of deskside support is: everybody shares passwords.

I mean everybody. CEOs can never really be trusted to know their passwords – their assistants do. And if the assistant is out, do you think business stops? No. All those passwords are written down in her desk drawer for just such emergencies.

This habit repeats itself across industries, in companies large and small. But what are the consequences of someone breaching security with a shared password? A case before the Ninth Circuit Court asks this very question. The Electronic Frontier Foundation filed an amicus brief in this case, the overview of which is explained in this EFF article:

David Nosal worked for Korn/Ferry, an executive recruiting company. Korn/Ferry had a proprietary database of information that, under corporate policy, employees could only use for official Korn/Ferry business. After Nosal left to start his own recruiting company, the government claimed he violated the CFAA when he allegedly convinced other ex-employees of Korn/Ferry to access the database by using a current Korn/Ferry employee’s access credentials, with that employee’s knowledge and permission. The district court refused to dismiss the charges, ruling that the act of using someone else’s computer login credentials, even with their knowledge and permission, is a federal crime. Nosal was convicted by a jury, sentenced to one year in prison, and ordered to pay a $60,000 fine and nearly $830,000 to Korn/Ferry in restitution.

The government paints a pretty dire case, but even at face value, what is happening here is fundamentally no different than any CEO – or Mayor – sharing a password. One has an allegedly unethical intent; one has a drearily predictable, utilitarian intent. But both acts are functionally identical.

The government’s position on this makes every night shift help desk jockey the exact same common criminal as the Mayor of Rochester. Has Lovely Warren committed a crime?

As we can see in the Ninth Circuit case and in Lovely Warren’s most recent dust-up, authentication – the act of verifying you are who you say you are – is a serious business. What, then, of the declared “compromiser” of Lovely Warren’s account? That member of her team or related party who used Lovely Warren’s credentials to access her account and rail against her detractor? When someone works against authentication and falsely identifies themselves, most of us would call that “hacking,” though the Mayor’s Office has so far avoided that term.
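To illustrate why attribution falls apart here, consider a hypothetical server log for one shared account. Every entry below is invented:

```python
# With a shared login, the record authenticates the credential, not the
# person. The best anyone can do afterward is infer distinct operators
# from circumstantial details like IP address and browser user-agent.
logins = [
    {"user": "mayor", "ip": "10.0.0.5",  "agent": "Chrome/Windows"},
    {"user": "mayor", "ip": "10.0.0.9",  "agent": "Safari/iPhone"},
    {"user": "mayor", "ip": "72.43.1.2", "agent": "Firefox/Mac"},
]

distinct = {(entry["ip"], entry["agent"]) for entry in logins}
print(f"1 account, at least {len(distinct)} apparent operators")
# The system proved only that someone knew the password -- not which someone.
```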

Cornell University’s Legal Information Institute documents the US code on fraud, and it seems to arguably describe what happened in Lovely Warren’s Facebook account, according to reports:

(a) Whoever, in a circumstance described in subsection (c) of this section—
(1) knowingly and without lawful authority produces an identification document, authentication feature, or a false identification document;
::snip::
(7) knowingly transfers, possesses, or uses, without lawful authority, a means of identification of another person with the intent to commit, or to aid or abet, or in connection with, any unlawful activity that constitutes a violation of Federal law, or that constitutes a felony under any applicable State or local law

Certainly, unlawful impersonation of a public figure must be a crime. It may even turn out that sharing passwords is illegal. If a crime has been committed, it behooves the Mayor and her newly-minted head of communications to provide some answers. It’s worth the conventional media in Rochester asking some real questions about this and not letting it go.

Was she hacked? Impersonated? Or did something else go on? And who will ask these questions, or does the whole story get swallowed up and forgotten in the Christmas holiday?

Categories
Rochester Technology

Dissecting a #fail: 7 questions about Lovely Warren’s “Stay In Your Own Lane.”

It seems a prominent politician’s Facebook account has been hacked, leading to an embarrassing series of screenshots going public. Sounds familiar, doesn’t it? Lovely Warren is in hot water, again. This time for allegedly sending out a scathing FU message to someone on Facebook – none of the reports are saying to whom the message was sent. The official response? Oh, man:

The mayor’s office says that there are several people who have access to Warren’s official and personal accounts, and she is working to see where the message in question came from.

Here is the portion of the conversation attributed to Lovely Warren’s account:

A portion of the conversation which has been attributed to Lovely Warren’s account.

She has since shut down both her personal and official accounts “until further notice.” So, let’s ask a few basic forensic questions.

7 Questions for Lovely Warren

  1. According to the screenshot, this appears to be a Private Message on Facebook. To whom was this message addressed?
  2. Let’s not assume anything. Do we even know that the offending message was sent from Lovely Warren’s account? Just because the Mayor’s Office says it is so? All I see is a “chat head” with Warren’s picture on it.
  3. If indeed it was sent from a Lovely Warren account, from which account was this sent? Her personal account or the Fan Page?
  4. If it was her personal account, Facebook keeps a record of every IP address and login, including the “user agent,” or the software being used to access the account. Has this been checked? Or not?
  5. If it was her Fan Page, these types of accounts are not allowed to message someone directly unless that fan has written a message to the Page first. Most Fan Page admins disable messaging primarily for this reason. Why was this option not disabled on Lovely Warren’s Fan Page?
  6. Fan Pages can also have multiple editors: any number of people can use the Page and post messages. Facebook has a good breakdown of which user roles can do what, and not all of them can send messages. Are all her editors administrators?
  7. Every editor’s activity can be logged, since they’re separate user accounts. Was none of this done with the Lovely Warren Fan Page? Was everybody just logging in as Warren to access her public page?

I could prattle on about the security aspects of this: unsecured accounts and all that. How many more accounts, and how many mission-critical ones, are sharing passwords? (Update: there are also legal questions, which I address here.) But really, this is just a dumb, dumb, dumb social media flub for which the Mayor’s Office and Lovely Warren herself need some organized answers, soon.

Categories
Technology

Are you ready for Unsub Friday?

There I am at my computer, staring at my inbox. It sits there, right at the top of my overflowing list, staring at me.

“When the hell did I get on Aeropostale’s mailing list?”

Even if you rarely get that much spam email, from about now through Cyber Monday – and a good deal longer – you’re apt to get emails about all kinds of random stuff you never knew you signed up for. Half shit-faced and giving out your email address again, huh? It’ll cost you come Black Friday.

But why not make Black Friday – useless concept that it is in terms of sales – into something really useful: Unsub Friday!

Since nearly every commercial mailer you get is required by law (CAN-SPAM, in the US) to include an unsubscribe link, this Friday and every day after it until Christmas is a beautiful opportunity to clean up your inbox for the coming year. Get rid of all that junk mail by just clicking Unsubscribe on everything that comes in… unless of course it’s the DFE Morning Briefing. Don’t throw the baby out with the bath water…
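If you want to get systematic about it, most bulk mail also carries a machine-readable List-Unsubscribe header (RFC 2369). A minimal sketch using only Python’s standard library, assuming your mail is exported to a local mbox file (the path is hypothetical):

```python
# Scan a local mbox and collect each sender's unsubscribe target from
# the List-Unsubscribe header, when present.
import mailbox

box = mailbox.mbox("Inbox.mbox")  # hypothetical export of your inbox
seen = {}
for msg in box:
    unsub = msg.get("List-Unsubscribe")
    if unsub:
        seen.setdefault(msg.get("From", "unknown"), unsub)

for sender, target in seen.items():
    print(sender, "->", target)
```

One afternoon of that and your January inbox will thank you.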

Categories
Technology

What can drones do for you?

They say the internet is composed of seven layers. Is it only seven? Feels like seventy. Personally, I’m starting to get exhausted from relaying my consciousness back and forth from datasphere to biosphere and back again.

The feeling is like driving through the same intersection 128 times a day. It’s tough to prioritize things in the modern world. Everything has a notification and every notification is important. Why else would they flash high intensity LEDs and vibrate in my pants?

Fortunately, that’s where drones fit in. The family tree of technology is as follows: military, industrial, commercial, then the box store. What we need next would be an immersive interface to reduce the differences between where I am physically located and what output devices are feeding my sensorium. Something like virtual reality, perhaps, which I hear will soon be coming to a Best Buy near you.

Put all this together and the next media platform has been born: simulated sensorium. With enough drones and enough remote sensors anything becomes my reality so long as someone has pushed the content. Commercial drones will establish a new grid of ad-hoc micro-networks ready to feed our media needs.

UAVs killed the Internet Star

The key word here is “drone,” which implies a certain level of autonomy. Quite simply, to use drones as a media platform we would need thousands of them deployed as ubiquitously as possible. That much air traffic would be impossible for humans to manage.

We would depend on their ability to identify elements of the world around them, not only to negotiate physical obstacles but also to relocate themselves to places of potential media relevance. For example, a batch of drones may be linked to Twitter. When a certain hashtag begins to trend, such as #Ferguson, they relocate themselves for a better view of events, as in the sketch below.
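Purely speculative, but the dispatch logic might look something like this. Every function, threshold and data structure here is an invented placeholder, not a real drone or Twitter API:

```python
# Speculative sketch: when a hashtag trends past a threshold, send idle
# drones toward the geotagged centroid of the conversation.
TREND_THRESHOLD = 10_000  # mentions per hour; an arbitrary cutoff

def dispatch(drones, trends):
    for tag, info in trends.items():
        if info["mentions_per_hour"] < TREND_THRESHOLD:
            continue
        for drone in drones:
            if drone["status"] == "idle":
                drone["status"] = "en_route"
                drone["target"] = info["centroid"]  # (lat, lon)

drones = [{"id": 1, "status": "idle", "target": None}]
trends = {"#Ferguson": {"mentions_per_hour": 45_000,
                        "centroid": (38.7442, -90.3054)}}
dispatch(drones, trends)
print(drones)
```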

Simultaneously they are forming ad-hoc networks with one another, supplying and demanding connection to the cloud. Of course some of these drones will be from the big media clearing houses like Reuters or AP. Others will come from more mainstream networks like CNN and FOX. And yet another batch will come from socially conscious aggregators with an interest in sharing their bandwidth to people on the scene via their drones.

People tuning in via telepresence will see real events in real time from dozens of possible locations as if they were there. In a case like Ferguson this means seeing the tear gas, hearing the protestors, maybe even joining the rally from your smartphone or tablet.

Or we could switch gears to a more personal view of things. The one on one interview with a celebrity or politician becomes a casual meeting for coffee.

When one drone washes the other.

Another key word to consider is “autonomy.” In the above model, a topic crests the curvature of trending algorithms, which prompts certain behaviors in the local drone networks. With their built-in autonomy, which is rudimentary from unit to unit, the swarms of drones begin to exhibit what is called emergent behavior: a group of individual units, each following simple local rules, begins to behave as one.
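Emergent behavior is easy to demonstrate in miniature. In this toy model (my own illustration, not a real control law), each unit follows one local rule, yet the group as a whole clusters together:

```python
# Each point drifts toward the average position of nearby neighbors.
# No unit is told "form a group"; the grouping emerges from the rule.
import random

points = [(random.uniform(0, 100), random.uniform(0, 100)) for _ in range(20)]

def step(pts, radius=30.0, pull=0.1):
    new = []
    for x, y in pts:
        nbrs = [(a, b) for a, b in pts if abs(a - x) + abs(b - y) <= radius]
        cx = sum(a for a, _ in nbrs) / len(nbrs)
        cy = sum(b for _, b in nbrs) / len(nbrs)
        new.append((x + pull * (cx - x), y + pull * (cy - y)))
    return new

for _ in range(100):
    points = step(points)
# After enough steps the scattered points have pulled into clusters.
```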

Since these units are capable of basic problem solving with the world around them, why not with each other as well? 3D printers could allow modifications to be made to other drones, from repairs to the fabrication of additional tools. Given enough iterations, factor in a few other bleeding edge technologies like quantum processors and sooner or later the drones will invent a tool or process we’ve never seen before to solve problems we may not even be aware of.

But let’s get it out of the way…

The term “drone” is usually found in the vicinity of words like “strike” and “surveillance.” What I’ve done is paint a rather idealistic portrayal that is the antithesis of what the media has compelled us to associate with the word “drone.” In fact, talk to any drone enthusiast and you will see eyes roll at the mention of the term; they prefer UAV (unmanned aerial vehicle).

Yes, they kill on command and spy on “the enemies of freedom”. Let’s talk about that for a moment. I won’t even attempt to justify or defend the use of any technology that is designed to kill, spy, or invade, but most technologies have this unfortunate detail common to their development. I don’t defend that truth; in fact, I openly criticize it.

But the fact is we didn’t get cracking on the digital processor until England decided to do something about those Nazis everyone was worried about. And the first thing we did with that intelligence? Churchill, as the disputed story goes, allowed Germany to destroy Coventry rather than reveal that Britain was reading German codes. (The wartime computer that effort produced was Britain’s Colossus; ENIAC was America’s.)

But maybe we can put the boiler plate cautionary tale rhetoric that follows any new technology aside for a moment.

Debunking privacy concerns.

Should you get to see a drone in operation, the first thing you will note is that they are loud, a crucial detail often left out of press coverage. Even the small ones that can fit in your hand are as noisy as you might imagine. Engineers who design and construct UAVs don’t see that changing so long as drones are powered by rotating props.

Quite simply, making all that air move to create lift makes a lot of noise. That’s not to say we might not overcome that engineering difficulty, but that would require a true paradigm shift in propulsion physics and aerodynamics combined. So for now, should a drone decide to hover around you, you will know it.

What about high altitude drones? Again, no dice. The cameras needed to capture ground details from high altitude are still quite bulky. For example, a camera capable of a ground sampling distance of 4 inches per pixel at an altitude of several thousand feet (which isn’t enough to see a face or even a license plate) weighs quite a bit more than a small drone is capable of taking off with. You’re better off worrying about planes leaving chemtrails if conspiracies are your thing.
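The physics behind that claim is the standard ground-sampling-distance relation. A quick sketch, with illustrative numbers of my own choosing rather than any specific camera:

```python
# Ground sampling distance: GSD = altitude * pixel_pitch / focal_length.
# Solve for the focal length needed to hit a target GSD from altitude.
altitude_m = 3048        # "several thousand feet" ~= 10,000 ft
pixel_pitch_m = 5e-6     # a typical small-sensor pixel, ~5 microns
target_gsd_m = 0.10      # roughly the 4 inches per pixel cited above

focal_length_m = altitude_m * pixel_pitch_m / target_gsd_m
print(f"required focal length: {focal_length_m * 1000:.0f} mm")
# ~152 mm: long, heavy glass for something meant to fit in your hand.
```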

Quite simply, drones as we imagine them used by Big Brother are not very effective tools for privacy invasion. Even if we could develop drones around these challenges, why go through the hassle and expense of operating a fleet of small aircraft when the owners of smartphones willingly broadcast their profiles to the cloud several times a second?

Finally, our urban infrastructure of concrete and steel plays havoc with the operation of small aircraft. Air currents get very strange around tall buildings. Additionally, the surfaces of buildings, even wood and plaster ones, interfere with the radio signals GPS and WiFi depend upon. An Orwellian fleet of police drones (or my idealistic free media drones, for that matter) would not be very effective unless we lowered our buildings to a more manageable height and rebuilt them with materials radio waves can permeate.

Now that I’ve got the fear mongering out of the way: what can drones do for you? As it turns out, EVERYTHING. Especially if you’re a member of the oldest profession. That’s right, farming.

Did I say Farming?

It is projected that 80% of all commercial drone use will be in agriculture.

Let’s say you’re an independent organic farmer raising a sustainable crop and trading at the co-op level with other growers of a similar capacity. In this context, drones make perfect sense for the small-time operator.

A farmer could make detailed observations of an entire field quickly and inexpensively. Drones could even be modified to aid with cultivation and harvesting of crops. Farming co-ops could collectively own and manage their own drones, or small businesses could spring up offering the service.
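One concrete example of those observations: NDVI, a standard vegetation-health index computed from the near-infrared and red bands of aerial imagery. The array values below are made up for illustration:

```python
# NDVI = (NIR - Red) / (NIR + Red); values near 1 suggest healthy
# canopy, values near 0 suggest bare soil or stressed plants.
import numpy as np

nir = np.array([[0.60, 0.55], [0.20, 0.58]])  # near-infrared reflectance
red = np.array([[0.10, 0.12], [0.18, 0.11]])  # red reflectance

ndvi = (nir - red) / (nir + red + 1e-9)       # epsilon avoids divide-by-zero
print(ndvi)
```

A drone sweep that produces a map like this tells a grower which corner of the field needs attention before the stress is visible from the ground.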

A simple cost analysis between manned and unmanned aircraft reveals quite a bit of savings. This is a potentially huge breakthrough for the sustainability movement. Agriculture is what gave rise to the idea of information in the first place. It stands to reason that the more data you have about your crop and your environment, the better your crop will be.

Affordable, reliable, accurate data collection.

Drones all come down to the collection of data. Sustainability through better data is a no-brainer. Everything about the contemporary landscape is facilitated by how much we know at any given second.

Our society is dependent on fast, accurate information to make long-term decisions about large systems. If we send drones into the field, we can collect that information quickly and inexpensively. More importantly, we can do it independent of the large organizations with the equipment and manpower to dominate the field of agro-logistics.

At the end of the day we’re talking about bits. Data. What we know and what we’re capable of doing with that information. Our ability to acquire data about the physical world relies on the sensory input we feed to the cloud. That works both ways, for giants like Google, for Farmer Betty and her trio of farm hands, the co-op they work with and the community that eats the food. More so if small operators can independently collect the relevant part of what big data offers without sacrificing their bits to the cloud.

Project Morpheus
http://blog.us.playstation.com/2014/03/18/introducing-project-morpheus/

Ferguson on Twitter
https://twitter.com/search?q=%23ferguson&src=typd

Emergent Behavior
http://curiosity.discovery.com/question/emergent-behavior

USA Today: Drones in Agriculture
http://www.usatoday.com/story/money/business/2014/03/23/drones-agriculture-growth/6665561/

AUVSI Cost Analysis
http://www.auvsi.org/knowledgeatauvsi/communityresources/blogsmain/blogviewer/?BlogKey=47ec1760-d31e-4f6a-9c8d-78ad8643ae54

Categories
SECURITY Technology

What price security? Google signals that security will affect site rankings

In a blog post dated August 6th, Google’s head of Webmaster Trends Analysis, Gary Illyes, announced that effective immediately, Google rankings will favor sites serving content from an HTTPS address. This form of communication is encrypted between the server and the client, and so discourages snooping by those with malicious intentions:

For these reasons, over the past few months we’ve been running tests taking into account whether sites use secure, encrypted connections as a signal in our search ranking algorithms. We’ve seen positive results, so we’re starting to use HTTPS as a ranking signal. For now it’s only a very lightweight signal—affecting fewer than 1% of global queries, and carrying less weight than other signals such as high-quality content—while we give webmasters time to switch to HTTPS. But over time, we may decide to strengthen it, because we’d like to encourage all website owners to switch from HTTP to HTTPS to keep everyone safe on the web.

This all sounds pretty decent so far, right? Still, I’m not sure that it actually is a good thing, when you step back and look at the full picture. In the most positive light, it could be construed as an ineffective distraction to real security. In a more negative light, Google’s new tactic could be seen as strong-arming the Internet, to the detriment of low-income Internet properties.

What is HTTPS?

HTTP stands for HyperText Transfer Protocol, and is the vehicle by which the majority of what people think of as the Internet is delivered. If you look at the address bar for this website, you’ll see that the first few characters are http://. That tells the browser to use HTTP.

If the same traffic is encrypted, which means scrambled so as to be unreadable by anybody but the server and you, the first few characters will be “https://”. The “s,” you see, is for “secure.”

It is fairly routine for your email, your bank and increasingly, your social networks to all be served up in this way. Encrypting your communications ensures some level of privacy from criminals, particularly encrypting the transmission of username/password challenges for logging in.

For the website in question, the price of admission to this secret world is what is known as an “SSL Certificate.” This is a set of secure data that only that server has, with which they encrypt the data they’ll be sharing with you. Basic SSL Certs with barebones support come in around $9 a year, which is a very affordable bar to entry for most Americans.
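For the curious, you can inspect a site’s certificate yourself with nothing but Python’s standard library. A small sketch; the hostname is just an example:

```python
# Connect to a site over HTTPS and print who issued its certificate
# and when it expires.
import socket
import ssl

hostname = "example.com"
ctx = ssl.create_default_context()
with socket.create_connection((hostname, 443)) as sock:
    with ctx.wrap_socket(sock, server_hostname=hostname) as tls:
        cert = tls.getpeercert()

print("issuer: ", dict(x[0] for x in cert["issuer"]))
print("expires:", cert["notAfter"])
```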

Now for the bad news

All of this sounds great, it really does. A more-secure website, especially one with usernames and logins, is a better one. But does that make one website a more authoritative voice or a better resource? Because that is what Google’s mission is supposed to be about, if we’re still concerned with that sort of thing.

Search is about content, not someone else’s priorities

If I wanted Google to make the decision for me where I “should” spend my time, as opposed to who has the content I’m looking for, I’d probably be asking for it. But that’s not why I use Google and that’s not why, as a publisher, I rely on Google’s rules to get my pages in front of your ocular tissues.

Where spam pages are concerned, Google is well within its mission to cull the herd. I don’t need to find myself in spam hell because I searched for a common term, nor do I want my site listed among the sleazy crop of Russian honey pots. But security is a personal matter about which I can make my own decisions.

Security is a state of mind

While we’re on the issue of the ambiguous term “security,” let’s keep in mind that, just because someone else can’t snoop your communications with a website, that in no way presupposes that visiting the site is “safe.” What’s to say the site itself isn’t doing dodgy things with your data? Google can’t guarantee that, nor should it try.

Wait. Google is talking secure communications, now?

Whether or not it was their fault, and whether or not Google was pressured by the government to allow holes in their security that the NSA could snoop through, the fact remains that they did exactly that. To hear Google now preaching about secure communications on the Internet is rich, to say the least.

Wait. SSL Certificates are secure, now?

Perhaps you recall, and perhaps you do not recall, the big security freak-out of a few months back? Heartbleed? Yeah, that whole thing. That’s when the world’s most widely-used free SSL library, OpenSSL, was found to have a gigantic hole in what was supposed to be its encryption.

No one with any knowledge of Internet security found it surprising that Heartbleed was discovered in the era of NSA snooping. It was exactly the kind of back-door intrusion loophole the NSA must have been employing. So now, Google wants us to trust certificates that they themselves helped undermine.

The “Google Tax”? $9 a year doesn’t sound like a lot to Middle Class America.

But any new cost of doing business matters, especially for those with lower incomes. And regardless of how much of a burden it is or is not, there is something counterproductive to the “free and open Internet” Google claims to want in requiring yet another fee to pay.

It seems to me that Google’s HTTPS plan is too disruptive in all the wrong ways, and not disruptive enough in the ways they would prefer it. I’m hoping this is another Google Wave-esque idea that goes the way of the dinosaur sooner rather than later.

Categories
Technology

Is Superintelligence a kinder, gentler Armageddon?

“First we build the tools then they build us.” -Marshall McLuhan

Following recent developments in the field of quantum mechanics, the thought struck me that we might need to amend the IQ scale to allow for scientific notation. That’s because computers are about to get exponentially smarter and faster. When that happens we may be facing humanity’s latest end-of-the-world scenario. It’s a kinder, gentler Armageddon suitable for the ergonomic, smart-enabled, iPhone-wielding society we have come to enjoy being. No worries, Android users will receive the pastry-themed update, eventually. This Armageddon has a name, and that name is Superintelligence.

But let me start at the beginning.

The physical limitations of processing are a big deal in computer science. The upper limit of any contemporary computer is set by the speed of light, minus any energy lost in the form of heat as electrons run along their circuits. Innovations such as reversible logic gates hold a lot of promise for a reduction in that energy loss. But what if we could do an end run around the laws of thermodynamics? Heck, while we’re at it, why not skip relativity too?

About a month ago you might have seen some people in the media talking about how the transporter beam was now a possibility. I’ll leave that for others to speculate on, but what the media got excited about were new breakthroughs in the field of quantum teleportation. Efforts by both the US Army and the Delft University of Technology have demonstrated the ability to “teleport” photons from one place to another by means of quantum entanglement.

To outline this as simply as possible: quantum entanglement is a phenomenon whereby quanta (in this case photons) become correlated with each other. In this condition the quanta share a single quantum state. In effect, what happens to one instantaneously happens to the other. If we could engineer this phenomenon for use in a computer, we could theoretically exchange bits of data faster than light. So hit the road, Newton and Einstein! We’ve got massive amounts of data to process and we can’t be bothered with thermodynamics or cosmological constants.
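For reference, the textbook entangled state this work leans on is the Bell pair, written here in standard quantum information notation (my addition, not from the research cited above):

```latex
% The simplest two-qubit entangled state, a Bell pair. Measuring either
% qubit instantly fixes the other's outcome, though the no-communication
% theorem says the correlation alone can't carry a message.
|\Phi^{+}\rangle = \tfrac{1}{\sqrt{2}}\bigl(|00\rangle + |11\rangle\bigr)
```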

But what will the result of an FTL processor be? And how would we apply such fast computing? Well, the first thing one needs to understand is just how much of a game changer this would be. It’s difficult to predict the outcome of any technology, but I can speculate on how FTL computing might be applied to a modern computer.

Let’s say you have an FTL computer running a contemporary operating system like Windows or Mac OS. Normally you need to install updates and patches to maintain such a system. But if your computer runs faster than light, maybe the computer could patch itself. Without the standard limitations of time, the FTL processor could run through all possible variations of the software, choose the optimal configuration, write the code, and install it, all within a reasonable amount of time. Perhaps even instantaneously, if the FTL computer is connected to others like it across the internet, where other nodes are running infinite permutations of the same process. This network then becomes a collective of self-improving machines that grows exponentially, which may sound familiar to fans of George Takei.

The key words here are exponential self-improvement. As this trend swings into full force, a point of critical mass will occur: a point of no return beyond which the application of data processing will far exceed the human capacity to manage by conventional means.

From here we enter the realm of speculation: just what does all this computing power mean? On the positive side, we may be able to apply this power toward data-intensive subjects like medical research, sustainable energy and the economy, anywhere big numbers rule or countless variations need to be considered. But on the other hand, a self-improving intelligence is exactly what keeps humans the dominant species. Will a superintelligent entity act in our best interest, or will it reshape our environment to suit its program?

The programs we set for these machines are key. It’s not too difficult to imagine a scenario where we program humans out of existence, an idea Hollywood has capitalized on many times. But I suggest a more subtle outcome, with hints of the perfect utopia all mixed up with the eventual erosion of our greatest human traits. We could cure cancer, balance energy production and restructure finances to guarantee everyone a high standard of living. At last all people would be free to pursue their dreams. Or would we?

With the need for constant computational balance in this false utopia, wouldn’t we process ourselves out of purpose and choice? If the program says you go, you go, and do whatever function you are designed for. Yes, designed for. Because while we’re eradicating disease we’re also engineering perfect biological components for the larger structure of society.

Does human creativity figure into the superintelligent paradigm? If every problem has a solution, what about the questions that sustain our humanity? Would curiosity and creativity become obsolete? Or worse, rendered into mere distractions and indulgences for the perfect techno-agrarian society? If the absence of the human soul isn’t frightening enough, just imagine the boredom.

So it becomes a question of limitation and application. If we press the button on superintelligence, will we be ready to turn it toward the right problems? Will we be ready, willing and able to shut it off when the time comes?

Categories
Technology

Facebook’s “Emotion Detector”: why doesn’t Cornell U take some of the heat?

By now, the story is everywhere: Facebook chose to edit its users’ timelines to experiment with whether predominantly good or predominantly bad news stories would affect their emotions. Not surprisingly, your friends’ funk spreads to you, even over the “innernets.”

But what’s got people really up in arms is that Facebook manipulated users’ feeds without telling them, and for the express purpose of scientific experiment. That should upset people, for a lot of reasons. Not the least of these: while it may be true that you’ve given your consent to have your data studied and manipulated for reasons other than you might intend, you didn’t give your consent to have your personal emotional state altered, which in this case is exactly what they did.

What is strange to me in all of this is that Facebook was not alone, yet they alone seem to be taking the blame. When first I heard of the story, more than two weeks ago, I heard it directly from the media arm of one of the universities that took part in the study, Cornell University. The University of California, San Francisco (UCSF) also took part in the Big Data study:

“People who had positive content experimentally reduced on their Facebook news feed, for one week, used more negative words in their status updates,” reports Jeff Hancock, professor of communication at Cornell’s College of Agriculture and Life Sciences and co-director of its Social Media Lab. “When news feed negativity was reduced, the opposite pattern occurred: Significantly more positive words were used in peoples’ status updates.”

The experiment is the first to suggest that emotions expressed via online social networks influence the moods of others, the researchers report in “Experimental Evidence of Massive-Scale Emotional Contagion through Social Networks,” published online June 2 in PNAS (Proceedings of the National Academy of Science) Social Science.

Facebook certainly has a lot to answer for. But this should also serve as a warning to would-be Big Data experimenters that Big Data affects little people. If the results of an experiment are spread out over several hundred thousand unwilling participants, that does not mean that the experiment is consequence free, nor should it be.

Update: someone much more familiar with scientific ethics standards and IRBs (Institutional Review Boards) than I am seems to be echoing my concerns. A key passage:

.. But while much of the uproar about Facebook’s inappropriate manipulation of human subjects has been  (appropriately!) directed at Kramer and his co-authors, missing from the commentary I’ve found on the Web thus far is any mention of the role of the (academic?) reviewers who read the manuscript and ultimately recommended it for publication by the National Academy of Sciences..  (Note: Forbes reports that researchers at Cornell passed on reviewing the final paper, although Cornell researchers did help design the study.)

Thanks go to reader @chelseamdo for the find.

Later Update: The Ithaca Voice finds reason to believe, based on a Mashable article, that the Cornell University study may have also received US Army backing. The Army undeniably funded another study by the same boffin, also concerned with shaping dialogue on social media. But Cornell denies that the Facebook study in question was funded in any way by any outside contributor.

While Professor Hancock, like many researchers, has conducted work funded by the federal government during his career, at no time did Professor Hancock or his postdoctoral associate Jamie Guillory request or receive outside funding to support their work on this PNAS paper. Initial wording in an article and press releases generated by Cornell University that indicated outside funding sources was an unfortunate error missed during the editorial review process. That error was corrected as soon as it was brought to our attention.