[Thesis 1] The (Generative!) Google ABC Book

[Image: abc]

This project can be accessed here.

Idea
An ABC book based on Google’s Autocomplete. Suggestions for each letter are automatically retrieved and may vary depending on the service selected by the user — Images, Books, YouTube, etc.
The layout is set up so that users can print the page, cut, fold, and make a (physical) instant book.

Concept
We spend our lives engaged with data systems, though we see them only through graphical interfaces. Even so, we don’t know much about how they work or why they came to be this way. This project is part of a series of experiments that try to shed light on this issue, making people think about their relationship with online sources of knowledge.
The idea of printing an ABC book out of Google’s Autocomplete suggestions creates an odd juxtaposition: though these two things are familiar, they are not meant to go together. Even so, they both constitute relevant sources of knowledge today.

More information about the series can be found here.

Iterations
This work follows two previous projects:
* Google ABC
* Google Supersearch

The iterations of this particular project are described below:
1. User input limited to the selection of service.
[Image: google_abc_3]

2. Color effects on the pictures. Users could choose foreground and background colors. This was an attempt to solve the visual contrast issues.
[Image: google_abc_4]

3. Content edited as a step-by-step tutorial. Added print functionality and instructions on how to make the physical book.
[Image: google_abc_6]

Technology
The project runs as described below (rough sketches of the suggestion-retrieval and scraping steps follow this list):

* The user selects a service. A script runs through the 26 letters of the alphabet and retrieves autocomplete suggestions for each letter using an undocumented Google API. (See Design Brief #3 for details.)
* By hitting OK to confirm, the user triggers the instant book layout, which is achieved through a combination of CSS rules and JavaScript.
* After the layout is complete, the page calls a web scraper, sending the suggestions stored in the first step — ‘Ariana Grande’ for A, ‘Beyonce’ for B, etc.
* The scraper runs on Node.js and PhantomJS, plus a few Node modules. It loads a Google Images page searching for the term sent by the client — ‘Ariana Grande,’ for instance.
* The scraper stores the address of the first image result and sends it back to the client.
* The image is appended to its position, above the initials.
* If ‘print’ is hit, a new HTML page is created and the book’s HTML is sent to it.
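For the first bullet above, here is a minimal Node.js sketch of what the suggestion retrieval might look like. The endpoint and its parameters come from the unofficial write-ups cited in Design Brief #3 (it is not a documented Google API), so the URL, the client parameter, and the response shape are assumptions that may break at any time.

```javascript
// fetch_suggestions.js — a rough sketch, not the project's actual code.
var https = require('https');

var letters = 'abcdefghijklmnopqrstuvwxyz'.split('');

function fetchSuggestions(letter, callback) {
  // client=firefox makes the endpoint answer with plain JSON: [query, [suggestions]]
  var url = 'https://suggestqueries.google.com/complete/search?client=firefox&q=' +
            encodeURIComponent(letter);
  https.get(url, function (res) {
    var body = '';
    res.setEncoding('utf8');
    res.on('data', function (chunk) { body += chunk; });
    res.on('end', function () {
      var parsed = JSON.parse(body);
      callback(letter, parsed[1]); // parsed[1] holds the list of suggestions
    });
  });
}

letters.forEach(function (letter) {
  fetchSuggestions(letter, function (l, suggestions) {
    console.log(l + ': ' + suggestions.join(', ')); // e.g. "a: ariana grande, ..."
  });
});
```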

The code for this project is on GitHub.
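For the scraper step described in the list above, a rough PhantomJS sketch (not the repository’s code). The Google Images URL and the CSS selector are placeholders: Google’s markup is undocumented and changes often, so a real scraper has to be adjusted by hand and wrapped in a small Node server to answer the client.

```javascript
// scrape_first_image.js — run with: phantomjs scrape_first_image.js "ariana grande"
var system = require('system');
var page = require('webpage').create();

var term = system.args[1] || 'ariana grande';
var url = 'https://www.google.com/search?tbm=isch&q=' + encodeURIComponent(term);

page.open(url, function (status) {
  if (status !== 'success') {
    console.log('failed to load ' + url);
    phantom.exit(1);
  } else {
    var src = page.evaluate(function () {
      var img = document.querySelector('img'); // placeholder: first <img> on the page
      return img ? img.getAttribute('src') : null;
    });
    console.log(src); // in the project, this address is sent back to the client
    phantom.exit(0);
  }
});
```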

[Thesis 1] Google’s Supersearch Elevator Pitch

This project allows users to compare Google’s Autocomplete suggestions from 7 different Google services: web, YouTube, recipes, products, news, images, and books. As users type something into the search box, the page displays the suggestions as lists in separate boxes. The options are clickable and redirect to a Google search page with the results for the selected term. For example, clicking on ‘chicken dance’ under the YouTube list redirects to a YouTube search page with all the results for ‘chicken dance.’ The purpose is to demonstrate how little we know about simple features we use every day — in this case, it shows that Google’s autocomplete suggestions are different for each Google website.
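To make the mechanism concrete, here is a small Node.js sketch (not the project’s code) that asks the undocumented suggest endpoint, discussed in Design Brief #3, for one query across a few services. The ds values follow unofficial write-ups of the endpoint; only yt (YouTube) is widely reported, so the others are guesses and may be ignored or wrong.

```javascript
// compare_services.js — a sketch of the comparison idea, not the project's implementation.
var https = require('https');

function suggestions(query, service, callback) {
  // 'ds' reportedly switches the Google service; an empty value means regular web search.
  var url = 'https://suggestqueries.google.com/complete/search?client=firefox' +
            '&q=' + encodeURIComponent(query) +
            (service ? '&ds=' + service : '');
  https.get(url, function (res) {
    var body = '';
    res.setEncoding('utf8');
    res.on('data', function (chunk) { body += chunk; });
    res.on('end', function () {
      callback(service || 'web', JSON.parse(body)[1]);
    });
  });
}

// 'yt' = YouTube per the unofficial write-ups; 'bo' (books?) and 'n' (news?) are guesses.
['', 'yt', 'bo', 'n'].forEach(function (service) {
  suggestions('chicken', service, function (name, list) {
    console.log(name + ': ' + list.slice(0, 5).join(', '));
  });
});
```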

[Thesis 1] Google’s Supersearch – Evaluative Module Prototype

[Image: supersearch]

This project retrieves information from Google’s autocomplete — more specifically, from 7 different Google services.

One thing that wasn’t very clear in the previous prototype is that autocomplete depends on the service you’re using — YouTube, Images, Web, etc. I found this interesting and built something that allows people to compare these suggestions on a single page.

By clicking on the links, users are redirected to the search results for that term.

[Thesis 1] Interview #1 – Jer Thorp

Bio

Jer Thorp defines his works as ‘software-based’ and ‘data-focused.’ He is the co-founder of The Office for Creative Research, along with Ben Rubin and Mark Hansen, and teaches at NYU’s ITP Program.

Some of his pre-OCR works include the algorithm that arranges the names on the 9/11 Memorial, and Project Cascade. The latter was developed by Jer as Data Artist in Residence at the NY Times R&D Group.

Interview

I talked to Jer for about 40 minutes. I didn’t conduct a formal interview. Instead, we talked about some of my ideas for this project and things he was working on. I tried to expand this to a conversation about broader themes in the field as well — future directions for data visualization, the role of data art, cultural changes related to data, etc. That is why this transcription doesn’t follow a Q&A format.

On the idea of ‘direct visualization’ as a form of representation closer to the artifact (the thing itself)

That’s a philosophically very deep question. You can get closer to the measurement, but it’s very hard to get close to the thing. That requires you to analyse not only the act of representation; you also have to consider the act of measurement and the intent of the measurement. It’s all built into it. In that case [Manovich’s film visualizations], I would consider the dataset to be this kind of analytics that’s running on the images. There’s a lot of decisions being made there.

On the limitations of algorithmic-based cultural analysis

There’s this myth that the computer and algorithms allow you some type of purity. That is not true at all. These analytics allow us to do things we’ve never been able to do before. But they also don’t mean that we can dispose of ethnography and the old-fashioned ‘talk to people,’ do some journalistic research, and so on. [Cultural analytics] is a tool, and a very powerful one. But it needs to be paired with other pieces.

About 3d (sculptural) visualisations versus 2d, in terms of perception

Scale and perspective make a huge difference for everything. I’m skeptical about those studies though, because if you dig up the papers half of them were done with a group of about 20, almost all white, grad students. And then we build all of our decisions based on them.

For us [OCR], it’s all about communication, which is a different thing. David Carson has this famous quote: “don’t confuse legibility with communication.” Because most of the time in our projects our fundamental goal is not to allow people to see “this is 6.8 or 6.2,” but instead to give them a way to interpret that, or to have a feeling about it, or to construct a narrative from there. Of course rigour is important, so we’re not gonna show things in a way that is misleading. But I don’t believe in best practices in that sense. What’s best practices for your magazine might not be best practices for another magazine. And what if I’m projecting on a wall on a building, or what if it’s a sculptural form? These are things that we have no rules for, which I think is why I like those things more.

About future directions for the data visualization field

We’ve been working a lot with performance, trying to think about what it means to perform data. We’re doing a long residency with the Museum of Modern Art. It’s an algorithmic performance: we write the scripts, and actors perform them in a gallery. It’s very much like traditional theatre in a way, but the content is generated using these data techniques we developed. That’s pretty interesting to me.

I think sculpture, data in a physical form, is still in its infancy. Most of the works we’ve seen in the past 6 years look like something you just pulled out of the screen and plotted on a table. A lot of them look like 3D renders, because of the 3D printers. There’s a ton of possibilities that haven’t been explored. We think about shape a lot in sculpture, but we don’t think a lot about material, its relation to the body or to a room, architecture, design, temperature… There’s a ton of room to do interesting things in that department.

On the reasons for a large number of recent data-related projects

Something about this movement has to do with something that’s happening in culture. It’s about and around this kind of data-based transformation that’s happening in the world. So it’s less about how it’s being done and more about why. It’s about the NSA, radical changes in transparency, wearable sensors, all those things coming together in this really big cultural change.

I’m definitely skeptical about the advertising slogans that have been used to promote this stuff, but I’m optimistic about its potential. Last year was really interesting, because for the first time we started to have real conversations about the exclusionary nature of big data. What does data mean for underprivileged communities? What decisions have been made based on a completely white North-American frame of thought? And how can we do better? That to me is really exciting.

The work we do here, and the work that a lot of people are doing, is fundamentally about trying to push a cultural change in how we think about data. And that is gonna take a long time, it’s probably gonna require a lot more than a 9-person studio pushing against it, but what we need is a generation of people who understand data and collectively can make decisions.

Most people don’t even have a good understanding of what data is. And it’s fundamentally easy to talk about data as measurements of something. If you want to be more accurate, you can say it is records of measurements of something. And it’s important to include ‘measurement’ in it because it is a human act. So it is a human artefact. We can program machines to do it, but they’re still doing it based on our decisions — until strong A.I. is developed, there’s no data that is not fundamentally human data.

Take-aways

This interview was conducted in an earlier phase of my project, when my main questions were about representation — how to get close to the thing, or the artifact. It had some impact on my later decisions, leading me to shift my focus from the result to the process. In other words, if every data visualization process is based on decisions and implies loss, is it possible to make it transparent? Can data visualization lead to a better understanding of data itself?

[printmaking] Pomodoro

Since January, I’ve been tracking my extra-class activity at Parsons and also the time I spend on freelance projects. I don’t use any software, just a spreadsheet:

date       | class       | project | activity  | start | pomodoros
10/26/2014 | printmaking | abc     | designing | 17:00 | ///

— where ‘pomodoro’ refers to the Pomodoro technique. Each slash here represents half an hour.

I haven’t done anything useful with the Parsons data so far — the freelance data, on the other hand, has helped me plan things better. So I wrote an openFrameworks sketch to visualize it, reading the data from Spring 2014:

[Images: pomodoro_1, pomodoro_2, pomodoro_3]
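The sketch itself was written in openFrameworks. As a rough, illustrative analogue of the mapping (each row’s slashes become a bar), a p5.js version could look like the following; the rows below are made up for the example.

```javascript
// A minimal p5.js analogue of the idea (the real sketch was openFrameworks):
// each row becomes a bar whose length is the number of pomodoros (slashes).
var rows = [
  { label: 'abc / designing',    pomodoros: '///' },
  { label: 'abc / coding',       pomodoros: '/////' },
  { label: 'pomodoro / drawing', pomodoros: '//' }
];

function setup() {
  createCanvas(400, rows.length * 30 + 20);
  noLoop();
}

function draw() {
  background(255);
  textSize(12);
  rows.forEach(function (row, i) {
    var count = row.pomodoros.length;   // one slash = half an hour
    var y = 20 + i * 30;
    fill(0);
    text(row.label + ' (' + count * 0.5 + 'h)', 10, y);
    fill(200, 80, 60);
    rect(180, y - 10, count * 25, 14);  // bar length = pomodoro count
  });
}
```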

The representations are far from readable — the list at the top is much more useful. But my idea was to print it as an abstraction anyway.
I printed the second version and traced it using a ballpoint pen and a Sharpie. I added some textures to get further away from the original visualization.

[Images: IMG_3153, IMG_3154, IMG_3160_2, IMG_3160, IMG_3161, IMG_3162]

I like the rice paper texture better (blue print), but the Speckletone paper definitely works best for lithography.

[Thesis 1] Design Brief #3

Thesis Question

In digital systems, we access information through databases and software. Graphical interfaces make this process easy and almost invisible. But they might also lead to false assumptions and misconceptions, derived from our lack of knowledge of the systems we engage with.

This project is concerned with the relation between users and data as mediated by software. It will explore how the process of gathering information from databases is shaped by the algorithms that drive it. How much do we know about the systems we use daily? How aware are we of possible biases in data retrieval? How can computerised systems be transparent and yet user-friendly?

Research

Databases

In this project, the term database defines any system of information, not restricted to a digital format. As researcher Christiane Paul points out, “it is essentially a structured collection of data that stands in the tradition of ‘data containers’ such as a book, a library, an archive, or Wunderkammer.” [@paul_database_2007]

Though analog and digital databases share this basic definition, the latter allows for multiple ways of retrieving and filtering the data it contains. To do so, it should be organised according to a data model, that is, the structural format in which the data is stored: hierarchical, relational, etc. It should also comprise the software that provides access to the data containers. Finally, according to Paul’s definition, another component of digital databases is the users, “who add a further level in understanding the data as information.” [@paul_database_2007]

Data versus Information

This distinction between data and information is important when analysing the impact of algorithms on data retrieval. Presenting a critical retrospective of data artworks, Mitchell Whitelaw defines data as “the raw material of information, its substrate”, whereas information “is the meaning derived from data in a particular context.” [@whitelaw_fcj-067_] In Whitelaw’s opinion, any rearrangement of the former is directed towards constructing the latter.

Whitelaw’s criticism is particularly directed towards works where this distinction is blurred. Writing about We Feel Fine [@we] and The Dumpster [@dumpster:], he states that “these works rely on a long chain of signification: (reality); blog; data harvesting; data analysis; visualisation; interface. Yet they maintain a strangely naive sense of unmediated presentation.” [@whitelaw_fcj-067_] Therefore, the mediation between user and database in these works leads to a confusion between data and information.

Search Boxes

Although Whitelaw writes about those issues in the context of data art, they can be extended to our everyday use of the web. As he writes in the introduction of his article, “in digital, networked culture, we spend our lives engaged with data systems. (…) The web is increasingly a set of interfaces to datasets.” [@whitelaw_fcj-067_] Search boxes constitute a particularly interesting interface in the context of this prototype. Their path to the data is not explicit, as opposed to a set of hyperlinks, which might give users a notion of how the data is structured. Also, their seeming neutrality, usually a single HTML input element followed by a button, reveals very little of the search engine itself.

An exception to that is the autocomplete feature, which tries to predict users’ input before they finish typing. By doing so, it reveals subsets of the data and, as a consequence, part of the algorithm behind it.

Google’s Autocomplete

Writing about how users interact with search boxes, Steve Krug tells a story from user tests he performed: “My favorite example is the people (…) who will type a site’s entire URL in the Yahoo search box every time they want to go there (…). If you ask them about it, it becomes clear that some of them think that Yahoo is the Internet, and that this is the way you use it.” [@krug_dont_2005] Krug’s book was originally published in 2000. Fourteen years later, users might not make the URL mistake anymore, and Google has by far surpassed Yahoo! in popularity — approximately 1 billion unique monthly visitors versus 300 million. [@top] However, the notion of a search engine as the web seems to prevail.

In an article for The Atlantic about Google’s autocomplete history, Megan Garber says that by typing a search query “you get a textual snapshot of humanity’s collective psyche.” The same notion is present in a series of reports from the website Mashable.com, with headlines like: “The United States of America According to Google Autocomplete,” and “Google Autocomplete Proves Summer Is Melting Our Brains.” [@google-2] Though the headlines are often humorous, they convey an idea of Google’s autocomplete as an accurate representation of our collective behaviour.

Some reports about the feature are more critical, though. An article in The Guardian about the UN Women campaign based on Google’s autocomplete states that the feature “isn’t always an entirely accurate reflection of the collective psyche. (…) What’s more, autocomplete also serves as a worrying indicator of how, in the interests of efficiency, we’re gradually letting technology complete our thought processes for us.” [@googles-1]

UN Women ad campaign uses Google’s autocomplete to point out sexism

According to Google’s own support page, the suggestions are “automatically generated by an algorithm without any human involvement, based on a number of objective factors.” The sense of neutrality in this sentence contradicts the following one, where the algorithm “automatically detects and excludes a small set of search terms. But it’s designed to reflect the diversity of our users’ searches and content on the web.” Therefore, the system is automated, but not free from human decisions.

Precedent

Image Atlas

Image Atlas [@about] was created by Aaron Swartz and Taryn Simon during a hackathon promoted by the arts organization Rhizome. It takes a search term as input, translates it into 17 different languages, and performs searches on Google Images in 17 different countries using the translated terms.

Presenting the project, Aaron said that “these sort of neutral tools like Facebook and Google and so on, which claim to present an almost unmediated view of the world, through statistics and algorithms and analyses, in fact are programmed and are programming us. So we wanted to find a way to visualize that, to expose some of the value judgments that get made.” [@aaron] In other words, Aaron and Taryn’s project was intended to provoke reflection about Google’s Image Search engine.

Project Concept

This prototype will utilize Google’s Autocomplete suggestions as a means to illuminate how the search engine works. It is not intended to make a moral judgement on the subject. Instead, its purpose is to lead users to question their own knowledge of the system.

The Autocomplete engine provides different results depending on the Google service being used. To provide comparison, this prototype will allow access to the suggestions from Google Web and Google Images.

The user input will be restricted to one character only. The reasons for this constraint are:

  • lower complexity: a reduced number of suggestions provides fewer but clearer comparisons.
  • emphasis on the system’s response, not the user input.

Apart from the input restriction, the prototype will use a familiar interface: a regular search box followed by a button. The prototype must also provide room for iteration and entertainment, in order to engage users in the exploration of the suggestions. For that reason, it will use a visual output instead of the regular list of search results.

Methodology

Database: Google Search and Autocomplete feature

The old Google Web Search API [@developers] and Google Images Search API [@google-1] have been deprecated as of 2010 and 2011, respectively. Though still functioning, they will stop operating soon, and users are encouraged to replace them with the new Google Custom Search API [@custom]. I started by implementing the search box using the new engine. However, this API is actually intended to perform custom searches inside specific websites, whereas the purpose of this project was to search the entire web using the Google Search engine. In the new API, this is only possible through a laborious process: setting up a custom search with any given website, then editing the search to include results from the entire web, and finally removing the website you set up at first.

The main problem, though, was that the autocomplete did not seem to work for the “entire web” option. In short, it does not mimic the regular Google Search as I expected.

After some research, I found a Google Autocomplete “API” that allows access to Google’s autocomplete results. However, it seems to be an engine developed for internal use rather than a real API. There is no official documentation from Google, only a few experiments by users who have tried to document it. [@google-3] One of them wrote a jQuery plugin [@getimagedata] that allows searches across various Google services, all through the same undocumented API.
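As a point of reference, those write-ups describe the requests and responses roughly as follows. The values are illustrative, and none of this is documented by Google, so it may change or disappear.

```javascript
// What the unofficial write-ups describe (illustrative only):
var request = 'https://suggestqueries.google.com/complete/search' +
              '?client=firefox' +  // plain JSON instead of Google's internal format
              '&ds=yt' +           // reportedly selects the service (yt = YouTube)
              '&q=b';              // the partial query

// Response shape: the query echoed back, then the list of suggestions.
var exampleResponse = ['b', ['beyonce', '...']];
```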

Next, to realise the idea of mimicking Google’s original search, I integrated the autocomplete plugin with the official Google Custom Search bar. The resulting search element mixes custom scripts with Google’s regular service.

[Image: search_bar]

First Iteration

The new Custom Search is based on simple HTML tags and does not require JavaScript knowledge. However, that also makes it harder to customize. Results can only be displayed according to a predefined set of layouts. The first iteration of this prototype uses a default layout. By clicking on the tabs, users can search on Google Web or Google Images. These UI elements are all presets from the API.

Google Custom Search default layout

The radio buttons at the top of the page change the autocomplete suggestions. This turned out to be confusing, because the default tabs for the search output have similar labels. Also, the resulting layout, similar to a regular list of results, is not visually appealing.

[Image: image_search_layout]

Second Iteration

The new Search API does not provide access to the results as structured data, which would have made it easy to customise the visual output. The solution was an alternative method (sketched after the list below):

  • Scrape the HTML elements from the page through the Document Object Model.
  • Erase Google’s injected HTML elements.
  • Store the addresses of the pictures.
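A rough sketch of that workaround in browser JavaScript; the container selector is a placeholder, since Google’s injected markup is undocumented and changes.

```javascript
// Collect the result images from the Custom Search markup, then clear it
// so a custom layout can take its place. Selectors are placeholders.
function collectImageAddresses() {
  var container = document.querySelector('.gsc-results'); // placeholder selector
  if (!container) { return []; }

  var images = container.querySelectorAll('img');
  var addresses = [];
  for (var i = 0; i < images.length; i++) {
    addresses.push(images[i].src); // store the addresses of the pictures
  }

  container.innerHTML = ''; // erase Google's injected elements

  return addresses;
}
```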

The idea for the visual output was to display the images forming the letter typed by the user. To do that through an automated process, it is necessary to have a grid to position the images. This could be done by either:

  • Writing a preset of coordinates for each pixel of each letter.
  • Rasterizing a given letter and storing the pixel coordinates.

Though less precise, the second method seemed faster to develop. It was prototyped using Processing first, in order to evaluate its feasibility and visual result.

[Image: pixelated_font]

After that, this method was translated to the web page using P5.js [@p5.js], a JavaScript library that uses Processing’s syntax. The process turned out to be laborious, though: there seem to be some flaws in the current implementation of P5’s pixel manipulation methods.

Besides, the images could not be loaded using the library, due to browser restrictions — see cross-origin resource sharing [@cross-origin] and P5’s developer Lauren McCarthy commenting on the issue [@loadimage]. The final prototype uses regular img tags instead (a condensed sketch of this approach follows).
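A condensed sketch of those two steps, assuming the image addresses from the scraping step are already at hand: the letter is rasterised into a tiny p5.js graphics buffer, and plain img tags are positioned absolutely on its dark cells. The grid size, threshold, and imageUrls array are arbitrary choices made for illustration.

```javascript
// Rasterise a letter at low resolution, then place an <img> on each "on" cell.
var imageUrls = ['img_0.jpg', 'img_1.jpg']; // placeholder: would come from the scraper

function buildLetter(letter) {
  var cols = 12, rows = 12, cell = 30;   // arbitrary grid resolution and cell size
  var pg = createGraphics(cols, rows);   // tiny offscreen buffer = pixelated letter
  pg.pixelDensity(1);
  pg.background(255);
  pg.fill(0);
  pg.textSize(rows);
  pg.textAlign(CENTER, CENTER);
  pg.text(letter, cols / 2, rows / 2);
  pg.loadPixels();

  var count = 0;
  for (var y = 0; y < rows; y++) {
    for (var x = 0; x < cols; x++) {
      if (pg.pixels[4 * (y * cols + x)] < 128) { // dark pixel: part of the letter
        var img = document.createElement('img');
        img.src = imageUrls[count % imageUrls.length];
        img.style.position = 'absolute';
        img.style.left = x * cell + 'px';
        img.style.top = y * cell + 'px';
        img.style.width = cell + 'px';
        img.style.height = cell + 'px';
        document.body.appendChild(img);
        count++;
      }
    }
  }
}

function setup() {
  noCanvas();          // the output is plain img tags, not a p5 canvas
  buildLetter('a');
}
```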

The script loops through the set of 20 retrieved images, positioning them according to the coordinates of the pixelated character. The following diagram sums up the technical process of this prototype:

[Image: diagram]

User Tests

The second prototype was published online [@googles] and sent to two different users. Besides the search UI elements, the only information displayed was:

  • Title: “Google’s ABC”
  • Description: “The alphabet through Google’s autocomplete”
  • A question mark linking to a blog post [@thesis] that outlined the project and part of its process.

[Image: a]

[Image: e]

The feedback from the users can be separated into 3 categories, according to the problems they address: interactive, visual, and communicative/conceptual.

  • Interactive
    • Lag when loading the results, made worse by the lack of feedback from the system.
    • Autocomplete not always responsive.
    • Radio buttons and autocomplete not working well together. Autocomplete options disappear after radio is selected.
  • Visual
    • Visual shapes not always forming recognisable letters.
    • Search box overlapping with letter, making some harder to recognise.
  • Communicative/conceptual
    • Message not clear. Comparing the results and critically thinking about them was only possible after reading the blog post.

Findings and Next Steps

The visual and interactive problems pointed out by the user tests were addressed in the last iteration of this prototype. Their solution was mostly technical and involved fixing some browser compatibility problems.

The communicative and conceptual problems, though, demand deeper changes for future prototypes. The interface is familiar enough to interact with, but its presumed neutrality does not seem to encourage comparison, on an immediate level, nor reflection as an ultimate goal. As for the entertainment purposes of this work, the feedback gathered is not conclusive.

Nevertheless, the technical aspects of this prototype point to some interesting aspects of how the autocomplete works. An exploration of the letters for each service, Images and Web, reveals significant differences: the former is mainly composed of celebrities, while the latter seems dominated by companies. A more explicit comparison between the two could lead to more engaging interactions.

Because comparison and communication seem to be the main problems to address, these are possible next steps:

  • Compare different systems — Google services? Various search engines?
  • Iterate on ways to make comparisons visual.
  • Sketch visual solutions that concisely communicate the goals.

Bibliography

“$.getImageData.” 2014. Accessed October 20. http://www.maxnov.com/getimagedata/.

“Aaron Swartz’s Art: What Does Your Google Image Search Look Like in Other Countries? | Motherboard.” 2014. Accessed October 14. http://motherboard.vice.com/blog/aaron-swartzs-art.

“About — The Publishing Lab.” 2014. Accessed October 11. http://the-publishing-lab.com/about/.

“Cross-Origin Resource Sharing — Wikipedia, the Free Encyclopedia.” 2014. Accessed October 23. http://en.wikipedia.org/wiki/Cross-origin_resource_sharing.

“Custom Search — Google Developers.” 2014. Accessed October 17. https://developers.google.com/custom-search/.

“Developer’s Guide – Google Web Search API (Deprecated) — Google Developers.” 2014. Accessed October 19. https://developers.google.com/web-search/docs/.

“Google Autocomplete ‘API’ | Shreyas Chand.” 2014. Accessed October 17. http://shreyaschand.com/blog/2013/01/03/google-autocomplete-api/.

“Google Image Search API (Deprecated) — Google Developers.” 2014. Accessed October 19. https://developers.google.com/image-search/.

“Google Poetics.” 2014. Accessed October 22. http://www.googlepoetics.com/.

“Google’s ABC.” 2014. Accessed October 25. http://54.204.173.108/parsons/thesis_1/google_alphabet/.

“Google’s Autocomplete Spells Out Our Darkest Thoughts | Arwa Mahdawi | Comment Is Free | theguardian.com.” 2014. Accessed October 22. http://www.theguardian.com/commentisfree/2013/oct/22/google-autocomplete-un-women-ad-discrimination-algorithms.

Krug, Steve. 2005. Don’t Make Me Think: A Common Sense Approach to Web Usability. 2nd edition. Berkeley, Calif.: New Riders.

“loadImage() Can No Longer Specify Extension with Second Parameter. · Issue #171 · lmccart/p5.js.” 2014. Accessed October 23. https://github.com/lmccart/p5.js/issues/171.

“P5.js.” 2014. Accessed October 23. http://p5js.org/.

Paul, Christiane. 2007. The Database as System and Cultural Form: Anatomies of Cultural Narratives. http://visualizinginfo.pbworks.com/f/db-system-culturalform.pdf.

“The Dumpster: A Visualization of Romantic Breakups for 2005.” 2014. Accessed October 22. http://artport.whitney.org/commissions/thedumpster/.

“[Thesis 1] Google ABC – Technical Module Prototype | Gabriel.” 2014. Accessed October 25. http://gabrielmfadt.wordpress.com/2014/10/20/thesis-1-google-abc-technical-module-prototype/.

“Top 15 Most Popular Search Engines | October 2014.” 2014. Accessed October 22. http://www.ebizmba.com/articles/search-engines.

“We Feel Fine / by Jonathan Harris and Sep Kamvar.” 2014. Accessed October 22. http://wefeelfine.org/.

Whitelaw, Mitchell. 2014. “FCJ-067 Art Against Information: Case Studies in Data Practice.” Accessed September 22. http://eleven.fibreculturejournal.org/fcj-067-art-against-information-case-studies-in-data-practice/.

[Thesis 1] Google ABC – Technical Module Prototype

B is for Beyonce, according to Google images

This prototype is still under development, but a functional version can be accessed here.

I’m trying to build a sort of Google ABC — an analogy between alphabet books and Google’s autocomplete engine. The idea is to make people reflect on how this system works.
Besides, Google has been our default choice when searching for knowledge for years now. Choosing an alphabet book as an analogy is a way of commenting on this.

This work is not finished yet, but here’s the process so far:

  1. The old Google Web Search API and Google Images Search API have been deprecated as of 2010 and 2011, respectively. Though still functioning, they will stop operating soon, and users are encouraged to replace them with the new Google Custom Search API. I started by implementing the search box using the new engine.
    However, this API is actually intended to perform custom searches inside specific websites, whereas the purpose of this project was to search the entire web using the Google Search engine. This is possible through a laborious process: setting up a custom search with any given website, then editing your search to include results from the entire web, and removing the website you set up at first.
    The main problem, though, was that the autocomplete did not seem to work for the “entire web” option. In short, it does not mimic the regular Google Search as I expected.
  2. I found a Google Autocomplete “API.” It allows access to Google’s autocomplete results. However, it seems to be an engine developed for internal use rather than a real API. There is no official documentation from Google.
    After a few more searches, I found a jQuery plugin that allows searches across various Google services, all through the same undocumented API.
  3. Integrated the autocomplete plugin with the Google Custom Search bar.

    Google Custom Search default layout
  4. The new Custom Search is based on simple HTML tags and does not require JavaScript knowledge. However, that also makes it harder to customize. Results can only be displayed according to a predefined set of layouts.
    The solution was to scrape the data from the results, erase Google’s injected HTML elements, and store only the pictures’ sources.

    Google Custom Search Layouts
  5. I started working on a layout to display the results. My idea was to form the initials using images. To do so, I’d have to grab the letter typed by the user, draw it on a canvas, and get the pixel results. I made a short Processing sketch to test the method first. (A rough JavaScript sketch of this step appears after this list.)

    Processing sketch to draw pixelated fonts.
  6. Because this small experiment seemed to work fine, I decided to implement the page display using P5.js, a JavaScript library that uses Processing’s syntax. The process turned out to be laborious, though: there seem to be some flaws in the current implementation of P5’s pixel manipulation methods. Due to browser restrictions — see cross-origin resource sharing and developer Lauren McCarthy’s comments on the P5 GitHub repository — the images had to be displayed as regular HTML img elements.
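As a rough sketch of step 5 in plain browser JavaScript (not the prototype’s actual code; the grid size and threshold are arbitrary): draw the typed letter on a tiny canvas, read the pixels back, and keep the coordinates of the dark ones; each of those cells later receives one image.

```javascript
// Returns the grid coordinates of the dark pixels of a letter.
function letterPixels(letter, cols, rows) {
  var canvas = document.createElement('canvas');
  canvas.width = cols;
  canvas.height = rows;
  var ctx = canvas.getContext('2d');

  ctx.fillStyle = '#fff';
  ctx.fillRect(0, 0, cols, rows);
  ctx.fillStyle = '#000';
  ctx.font = rows + 'px sans-serif';
  ctx.textBaseline = 'top';
  ctx.fillText(letter, 0, 0);

  var data = ctx.getImageData(0, 0, cols, rows).data;
  var coords = [];
  for (var y = 0; y < rows; y++) {
    for (var x = 0; x < cols; x++) {
      if (data[4 * (y * cols + x)] < 128) { // dark pixel: part of the letter
        coords.push({ x: x, y: y });
      }
    }
  }
  return coords;
}

// e.g. letterPixels('b', 12, 12) -> [{ x: 2, y: 0 }, ...]
```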

Code on GitHub.