
[Thesis 1] Design Brief #4


Thesis Question

In digital systems, we access information through databases and software. Graphical interfaces make this process easy and almost invisible. But they might also lead to false assumptions and misconceptions, derived from our lack of knowledge of the systems we engage with.

This project is concerned with the relation between users and information mediated by software. It will explore how the process of gathering information from databases is shaped by the algorithms that drive it. How much do we know about the systems we use daily? How aware are we of possible biases in data retrieval?

Research

Data, Database, and Information

This project aims to provoke reflection on our relationship with online sources of information. Though it might make use of online databases, and retrieve data from them, its main concern is information. Therefore, a distinction between the three terms is necessary.

In this project, the term database defines any system of information, not restricted to a digital format. As researcher Christiane Paul points out, “it is essentially a structured collection of data that stands in the tradition of ‘data containers’ such as a book, a library, an archive, or Wunderkammer.” [@paul_database_2007]

Data, as a consequence, is the content of the database, whatever its nature — image, text, or number. Finally, information, often referred to as “processed data,” is understood in this project as a piece of content (data) that has been through a process of analysis or interpretation. New media artist and academic Mitchell Whitelaw defines data as “the raw material of information, its substrate”, whereas information “is the meaning derived from data in a particular context.” [@whitelaw_fcj-067_]

Information Arts

Given that the production and circulation of information is the main concern of this project, it belongs to the field of information arts. This practice is defined by Christiane Paul and Jack Toolin as works that “explore or critically address the issues surrounding the processing and distribution of information by making information itself the medium and carrier of this exploration.” [@paul_encyclopedia_]

Considering that this work utilizes algorithms to process digital information, there is an overlap between information arts and digital (or new media) art. However, both the subject and the medium of this project are information; digital media is only a part of the techniques employed here. Therefore, the category of information art will be adopted instead of digital or new media art.

Methodology and Series Concept

This project is the first of a series intended to provoke discussion on our use of online sources of information. The series is defined by a set of rules:

  • Each work will utilise an online database as a source for a print output. Database here is used in the broad sense of a set of data — from a simple html page to a social network stream, for example.
  • The print output must relate to the source conceptually, making a clear link between the two. It should also evoke familiar print forms — books, diaries, posters, guides, instruction manuals, etc.
  • The prints will be generated through an automated process.
  • Both online sources and print forms should not only have a clear link, but also be relevant today.


Paul and Toolin present a diagram that outlines the general practice of information arts. In this model, the work can either be finished at the stage of information, serving as a new data source, or generate a database or visualization that will then be redistributed. The latter is the case of this project, which will reorganize the data source, constructing a new database, and then translate it into new forms of communication.

[Figure: Paul and Toolin’s diagram of the information arts practice]

Printed media is utilised in this project as a communication device, for its capacity to evoke the cultural meanings embedded in its familiar forms. ABC books, for instance, are directly associated with education, children, and language, no matter how diverse their formats and content.

The translation from digital to print also serves the purpose of creating an uncanny effect. Though both the online content and the printed media must be familiar, their displacement should generate an odd hybrid form. In addition, this process helps bring focus to the dataset by removing it from its original UI. For example, a comparison of all of Google’s autocomplete suggestions from A to Z would not be possible through the search box, because it retrieves results one at a time.

Implementation Prototype: Google’s Supersearch

“Google’s ABC,” the prototype described in the previous paper, did not allow users to easily compare autocomplete suggestions across different services — Google Web and Google Images, in that case. This new prototype, “Google’s Supersearch,” was developed in response and allows users to compare autocomplete suggestions from 7 different Google services: Web, YouTube, Recipes, Products, News, Images, and Books.

However, this prototype is not conceptually linked to the series outlined in the methodology section. Most of the questions it tries to answer are technical: which Google services offer access to their autocomplete suggestions? Is it possible to retrieve their results without using any API?

In their paper “What Do Prototypes Prototype?,” Stephanie Houde and Charles Hill propose a model for categorizing prototypes according to their functions. [@houdc_what_1997] Based on that model, “Google’s Supersearch” is considered here an implementation prototype, since most of its questions relate to technical aspects of a larger project.

[Figure: Houde and Hill’s model of what prototypes prototype]

“Google’s Supersearch” has a simple functionality. As users type in the search box, the page displays autocomplete suggestions — from each of the 7 Google services — as lists in separate boxes. The options are clickable and redirect to a Google search page with the results for the selected term. For example, clicking on “chicken dance” under the YouTube list redirects to a YouTube search page with all the results for “chicken dance.” This is achieved using the same undocumented Google Autocomplete “API” [@google] employed in “Google ABC.”
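For reference, such a request might look roughly like the Node.js sketch below. It assumes the unofficial endpoint and the ds service codes described in the informal write-ups cited here; none of this is officially documented, so treat both as assumptions that may change.

```javascript
// suggest.js — hedged sketch of the undocumented autocomplete endpoint.
// The endpoint and the "ds" codes come from informal write-ups, not from
// official Google documentation.
var https = require('https');

function suggest(term, ds, cb) {
  var url = 'https://suggestqueries.google.com/complete/search' +
            '?client=firefox&q=' + encodeURIComponent(term) +
            (ds ? '&ds=' + ds : ''); // e.g. ds=yt for YouTube; omit for Web
  https.get(url, function (res) {
    var body = '';
    res.on('data', function (chunk) { body += chunk; });
    res.on('end', function () {
      // response shape: [query, [suggestion, suggestion, ...]]
      cb(JSON.parse(body)[1]);
    });
  });
}

suggest('chicken', 'yt', function (list) { console.log('YouTube:', list); });
suggest('chicken', null, function (list) { console.log('Web:', list); });
```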

The interface is minimal and has no symbolic associations — the ABC metaphor was removed from it. The only text displayed, besides the title and the results, is “What does each Google service suggest you?”

The prototype was received by users as a rather practical and useful application, capable of aggregating different functions from Google in a single place.

[Screenshot: Google’s Supersearch]

Role Prototype: Google ABC Book (Printed)

The design questions addressed by this prototype relate to the communicative aspects of the project. First, do people recognise it as an ABC book? Do they understand it is intended for children? Second, what do they think about the use of autocomplete suggestions as educational material? Do they perceive any critical message?

The prototype was made by storing the top autocomplete suggestions for each letter, using Google Images. After that, images for each term were searched and downloaded. The book was then designed in Adobe Illustrator, using one of the many layouts for instant books described by Esther K. Smith in “How to Make Books.” [@smith_how_2007] It was printed on a single sheet of paper in letter format. This layout does not require any binding or advanced tools besides scissors and glue.

[Photos of the printed prototype]

The back of the book displays the time and date of that edition, along with the following lines: “Earlier and earlier kids are learning things online. So why not start as soon as possible? This ABC book curated by Google Images brings to your children the top searches for each letter! Kids will love it!”

The response to this prototype was positive, in general. Though some people did not understand exactly how the content was generated, the critical association between the online source and the print form was clear to most users. Some even stated that giving kids these books could be an interesting experiment, since the celebrities in them were well known even to children at an early age. This was a positive outcome of the user test, since the intention of this project is not to point out negative aspects of technology, but instead to provoke discussion and reflection.


Integration Prototype: Google ABC Book (Online)

This prototype combines technical aspects of both previous ones. Its online version utilizes the same undocumented API [@google] used to retrieve autocomplete suggestions from various Google services. The layout is set so that users can print the page, cut, fold, and make an instant book exactly like the one described in the previous section.

Accessing and manipulating the images was the most challenging part of the development of this prototype. As described in the previous paper, the new Google Custom Search [@custom] does not allow for customization of the results layout. Also, the Google script that receives and displays the images on the page runs on the server side. Therefore, it is not possible to manipulate the images directly with a client-side script.

In previous prototypes, a quick fix was to use the official custom search and then scrape the results from the page to manipulate them. That would be laborious and ineffective for the current prototype, in which 26 searches need to be performed in sequence.

The solution was to use a web scraper running on the server side. It works by emulating a client visiting a URL — in this case, 26 Google Images pages, one for each term. A script then grabs the first image from the results and sends it back to the client. This was achieved using Node.js, a server-side JavaScript runtime, with a combination of modules for web scraping and for connecting client and server.
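As a rough illustration, a PhantomJS script for that step might look like the sketch below. This is not the project’s actual code: the Google Images URL parameter is real, but the result-page selector is an assumption that would need checking against the live markup.

```javascript
// scrape.js — run with: phantomjs scrape.js "Ariana Grande"
// Minimal sketch of the server-side scraping step.
var page = require('webpage').create();
var term = require('system').args[1];
var url = 'https://www.google.com/search?tbm=isch&q=' + encodeURIComponent(term);

page.open(url, function (status) {
  if (status !== 'success') {
    console.log('failed to load ' + url);
    phantom.exit(1);
  }
  var src = page.evaluate(function () {
    // first image in the results — the selector is an assumption
    var img = document.querySelector('img');
    return img && img.src;
  });
  console.log(src); // in the prototype, this address is sent back to the client
  phantom.exit();
});
```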

Once that problem was solved, the basic layout was designed, aimed at making the page ready for print. This was achieved through a combination of CSS rules and JavaScript.

Some iterations at this point included sliders to change the colors of the pictures and letters. This was an attempt to solve the contrast problems, but it was discarded to avoid adding extra steps to the generative design process. Instead, the version sent to users displayed the letters in black, with a white outline.

[Screenshots of the prototype’s iterations]

Technically, the prototype works as follows:

  • User selects a service. A script runs through the 26 letters of the alphabet and retrieves autocomplete suggestions for each letter using the undocumented Google API (see design brief #3 for details, and the sketch after this list).
  • By hitting OK to confirm, the user triggers the instant book layout.
  • After the layout is complete, the page calls a web scraper, sending the suggestions stored in the first step — ‘Ariana Grande’ for A, ‘Beyonce’ for B, etc.
  • The scraper runs on Node.js, some Node modules, and PhantomJS. It loads one Google Images page for each term sent by the client — ‘Ariana Grande,’ ‘Beyoncé,’ ‘cat,’ etc.
  • The scraper stores the address of the first image result and sends it back to the client.
  • The image is appended to its position, below the initials.
  • If ‘print’ is hit, a new html page is created and the html from the book is sent to it.
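A hedged sketch of the first step, in plain JavaScript; getSuggestions, selectedService, and buildBookLayout are hypothetical names standing in for the prototype’s actual functions.

```javascript
// One suggest request per letter, A–Z; collect the top suggestion for each.
var letters = 'abcdefghijklmnopqrstuvwxyz'.split('');
var suggestions = {};
var pending = letters.length;

letters.forEach(function (letter) {
  // getSuggestions wraps the undocumented endpoint (see the earlier sketch)
  getSuggestions(letter, selectedService, function (results) {
    suggestions[letter] = results[0]; // keep only the top suggestion
    if (--pending === 0) {
      buildBookLayout(suggestions);   // step 2: the instant book layout
    }
  });
});
```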

[Figure: technical diagram of the prototype]

User Tests

The prototype was sent as a web link (http://54.204.173.108:3011) to 22 people, of whom 10 sent a response. Only a few had a general idea of the concept, and most of them had not seen the printed prototype first. People were invited to participate through email or in person. Because different aspects of the project were being tested at once, no guidelines were given on what to comment on.

The main areas on which this prototype focused were:

  • Implementation: check technical aspects, especially related to the web scraper, concerning speed and stability.
  • Role: questions related to the communicative aspects. Is the “ABC Book” metaphor clear? Do people sense a critical tone in the message or not? Does it provoke discussion?

As for the look and feel, it was only partially addressed. The default html styling adopted is far from representing a definitive visual design for the project. However, the basic wireframe, with the process split into 3 steps — source selection, printed form, and tutorial — was meant to be final.

Up until it was sent to the users, this prototype had been running on a local server, with no major technical problems. That was not the case when it was published on a remote server, though. The time for scraping and loading the images went from less than a minute to about 5 minutes. This had a drastic effect on the user experience. Some participants responded immediately expressing their concern, and some ended up not taking part in the test at all.

Technical adjustments were made and the prototype was updated in less than 12 hours after the invitation. In the new version, it takes about 2 seconds to load the first image and 15 to 20 seconds to load all 26.

Once this problem was solved, the responses from the participants did not point out other technical problems. The information gathered after that can be grouped into 4 areas: communication, experience, layout, and instructions. Each one is also split here into feedback and suggestions — except for the first, for which users did not suggest any direct change.

  1. Communication
    • Feedback
      • It is clearly a critical project, not a commercial one.
      • The print aspect works well to capture a moment of something that is constantly changing.
      • It is not clear if the message is genuine or ironic.
      • It could be utilized as something practical, to teach computer literacy.
  2. Experience
    • Feedback
      • The questionnaire format is confusing, because there is no interaction after the first step.
      • Instructions and options don’t leave much breathing room.
      • There is no option to personalize the results.
      • Some images are not recognizable, and there is no way to see what they refer to.
    • Suggestions
      • Expand the questionnaire/interaction to the next steps.
      • Allow for theme-based custom results — pop-singers, for instance.
      • Add option to change location, allowing for comparison between countries.
      • Add interaction to images, to display their descriptions.
      • Include option to change each of the results.
  3. Layout
    • Feedback
      • It is hard to see the images; they are too small and the letters cover a big part of them.
      • The upside-down letters are not legible.
      • The layout is not visually exciting.
      • The layout only works for print.
    • Suggestions
      • Add option to control font size.
      • Create two separate layouts: one for the screen and another for print.
      • Show the final product somehow.
  4. Instructions
    • Feedback
      • It is hard to follow the instructions.
      • Some sentences are confusing.
    • Suggestions
      • Moving images or videos would be clearer.

Findings and Next Steps

Some of the responses to the communicative aspect of the project were contradictory. Most people understood the project as somewhat critical, though they were not sure about its precise intention. That is not a major concern, because leaving room for interpretation and discussion is part of the goal of this project.

As for the problem with the instructions, it can be easily addressed through the use of video or moving images, as suggested in one of the responses.

After the test, some of the users were presented with the printed version of the prototype – none of them had printed it from the page. The responses were all positive, as opposed to their opinions about the web page layout. This was an unexpected and yet reasonable problem: since the page was developed as a simple bridge between the source and the print output, not much thought was put into the online experience. As a consequence, if users do not print it, their experience is limited and frustrating. Therefore, experience and layout are the areas that demand the deepest and most immediate changes for the next iterations.

The next steps of this project will conduct two simultaneous investigations:

  • Online experience: Define a better balance between the print and digital versions. The latter should work independently from the former, besides being more interactive and having its own layout.
  • Series: Investigate other online sources and print forms that fulfill the rules and conceptual goals of this project. A methodology still being tested is to list the most popular and relevant online sources in use today and find their corresponding print forms.

Bibliography

“Custom Search — Google Developers.” 2014. Accessed October 17. https://developers.google.com/custom-search/.

“Google Autocomplete ‘API’.” Shreyas Chand. 2014. Accessed October 17. http://shreyaschand.com/blog/2013/01/03/google-autocomplete-api/.

Houde, Stephanie, and Charles Hill. 1997. “What Do Prototypes Prototype?” http://www.itu.dk/~malmborg/Interaktionsdesign/Kompendie/Houde-Hill-1997.pdf.

Paul, Christiane. 2007. The Database as System and Cultural Form: Anatomies of Cultural Narratives. http://visualizinginfo.pbworks.com/f/db-system-culturalform.pdf.

Paul, Christiane, and Jack Toolin. 2014. “Encyclopedia of Library and Information Sciences.” Accessed November 26. http://www.tandfonline.com/doi/abs/10.1081/E-ELIS3-120043697.

Smith, Esther K. 2007. How to Make Books: Fold, Cut & Stitch Your Way to a One-of-a-Kind Book. 1st ed. New York: Potter Craft.

Whitelaw, Mitchell. 2014. “FCJ-067 Art Against Information: Case Studies in Data Practice.” Accessed September 22. http://eleven.fibreculturejournal.org/fcj-067-art-against-information-case-studies-in-data-practice/.

[Thesis 1] The (Generative!) Google ABC Book


This project can be accessed here.

Idea
An ABC book based on Google’s Autocomplete. Suggestions for each letter are automatically retrieved and may vary depending on the service selected by the user — Images, Books, YouTube, etc.
The layout is set so that users can print the page, cut, fold, and make a (physical) instant book.

Concept
We spend our lives engaged with data systems, though we see them only through graphical interfaces. Even so, we don’t know much about how they work and why they came to be this way. This project is part of a series of experiments that try to shed light on this issue, making people think about their relationship with online sources of knowledge.
The idea of printing an ABC book out of Google’s Autocomplete suggestions creates an odd juxtaposition: though these 2 things are familiar, they’re not meant to be together. Even so, they both constitute relevant sources of knowledge today.

More information about the series can be found here.

Iterations
This work follows 2 previous projects:
* Google ABC
* Google Supersearch

The iterations of this particular one were as described below:
1. User input limited to the selection of service.
[Screenshot: iteration 1]

2. Color effects on the pictures. Users could choose foreground and background colors. This was an attempt to solve the visual contrast issues.
[Screenshot: iteration 2]

3. Content edited as a step-by-step tutorial. Print functionality added, along with instructions on how to make the physical book.

[Screenshot: iteration 3]

Technology
The project runs as described below:

* User selects a service. A script runs through the 26 letters of the alphabet and retrieves autocomplete suggestions for each letter using the undocumented Google API. (See design brief #3 for details.)
* By hitting OK to confirm, the user triggers the instant book layout, achieved through a combination of CSS rules and JavaScript.
* After the layout is complete, the page calls a web scraper, sending the suggestions stored in the first step — ‘Ariana Grande’ for A, ‘Beyonce’ for B, etc.
* The scraper runs on Node.js, some Node modules, and PhantomJS. It loads a Google Images page searching for each term sent by the client — ‘Ariana Grande,’ for instance.
* The scraper stores the address of the first image result and sends it back to the client.
* The image is appended to its position, above the initials.
* If ‘print’ is hit, a new html page is created and the html from the book is sent to it.

The code for this project is on Github.

[Thesis 1] Google’s Supersearch – Evaluative Module Prototype

[Screenshot: Google’s Supersearch]

This project retrieves information from Google’s autocomplete — more specifically, from 7 different Google services.

One thing that wasn’t very clear in the previous prototype is that autocomplete depends on the service you’re using — YouTube, Images, Web, etc. I found this interesting and built something that allows people to compare these suggestions on a single page.

By clicking on the links, users are redirected to the search results of that term.

[Thesis 1] Design Brief #3

Thesis Question

In digital systems, we access information through databases and software. Graphical interfaces make this process easy and almost invisible. But they might also lead to false assumptions and misconceptions, derived from our lack of knowledge of the systems we engage with.

This project is concerned with the relation between users and data mediated by software. It will explore how the process of gathering information from databases is shaped by the algorithms that drive it. How much do we know about the systems we use daily? How aware are we of possible biases in data retrieval? How can computerised systems be transparent and yet user-friendly?

Research

Databases

In this project, the term database defines any system of information, not restricted to a digital format. As researcher Christiane Paul points out, “it is essentially a structured collection of data that stands in the tradition of ‘data containers’ such as a book, a library, an archive, or Wunderkammer.” [@paul_database_2007]

Though analog and digital databases share this basic definition, the latter allow for multiple ways of retrieving and filtering the data they contain. To do so, a digital database should be organised according to a data model, that is, a structural format for how the data is stored: hierarchical, relational, etc. It should also comprise the software that provides access to the data containers. Finally, according to Paul’s definition, another component of digital databases is the users, “who add a further level in understanding the data as information.” [@paul_database_2007]

Data versus Information

This distinction between data and information is important when analysing the impact of algorithms on data retrieval. Presenting a critical retrospective of data artworks, Mitchell Whitelaw defines data as “the raw material of information, its substrate”, whereas information “is the meaning derived from data in a particular context.” [@whitelaw_fcj-067_] In Whitelaw’s view, any rearrangement of the former is directed towards constructing the latter.

Whitelaw’s criticism is particularly directed at works where this distinction is blurred. Writing about We Feel Fine [@we] and The Dumpster [@dumpster:], he states that “these works rely on a long chain of signification: (reality); blog; data harvesting; data analysis; visualisation; interface. Yet they maintain a strangely naive sense of unmediated presentation.” [@whitelaw_fcj-067_] The mediation between user and database in these works thus leads to a confusion of data with information.

Search Boxes

Although Whitelaw writes about those issues in the context of data art, they can be extended to our everyday use of the web. As he writes in the introduction to his article, “in digital, networked culture, we spend our lives engaged with data systems. (…) The web is increasingly a set of interfaces to datasets.” [@whitelaw_fcj-067_] Search boxes constitute a particularly interesting interface in the context of this prototype. Their path to the data is not explicit, as opposed to a set of hyperlinks that might give users a notion of how the data is structured. Also, their seeming neutrality, usually a single html input element followed by a button, reveals very little of the search engine itself.

An exception is the autocomplete feature, which tries to predict users’ input before they finish typing. By doing so, it exposes subsets of the data and, as a consequence, part of the algorithm behind it.

Google’s Autocomplete

Writing about how users interact with search boxes, Steve Krug tells a story from user tests he performed: “My favorite example is the people (…) who will type a site’s entire URL in the Yahoo search box every time they want to go there (…). If you ask them about it, it becomes clear that some of them think that Yahoo is the Internet, and that this is the way you use it.” [@krug_dont_2005] Krug’s book was originally published in 2000. Fourteen years later, users might not make the URL mistake anymore, and Google has by far surpassed Yahoo! in popularity — approximately 1 billion unique monthly visitors versus 300 million. [@top] However, the notion of a search engine as the web seems to prevail.

In an article for The Atlantic about the history of Google’s autocomplete, Megan Garber says that by typing a search query “you get a textual snapshot of humanity’s collective psyche.” The same notion is present in a series of reports on the website Mashable.com, with headlines like “The United States of America According to Google Autocomplete” and “Google Autocomplete Proves Summer Is Melting Our Brains.” [@google-2] Though the headlines are often humorous, they convey an idea of Google’s autocomplete as an accurate representation of our collective behaviour.

Some reports about the feature are more critical, though. An article in The Guardian about the UN Women’s campaign based on Google’s autocomplete states that the feature “isn’t always an entirely accurate reflection of the collective psyche. (…) What’s more, autocomplete also serves as a worrying indicator of how, in the interests of efficiency, we’re gradually letting technology complete our thought processes for us.” [@googles-1]

UN Women ad campaign uses Google’s autocomplete to point out sexism

According to Google’s own support page, the suggestions are “automatically generated by an algorithm without any human involvement, based on a number of objective factors.” The sense of neutrality in this sentence contradicts the following one, in which the algorithm “automatically detects and excludes a small set of search terms. But it’s designed to reflect the diversity of our users’ searches and content on the web.” The system is therefore automated, but not free from human decisions.

Precedent

Image Atlas

Image Atlas [@about] was created by Aaron Swartz and Taryn Simon during a hackathon promoted by the arts organization Rhizome. It takes a search term as input, translates it into 17 different languages, and performs searches on Google Images in 17 different countries using the translated terms.

Presenting the project, Aaron said that “these sort of neutral tools like Facebook and Google and so on, which claim to present an almost unmediated view of the world, through statistics and algorithms and analyses, in fact are programmed and are programming us. So we wanted to find a way to visualize that, to expose some of the value judgments that get made.” [@aaron] In other words, Aaron and Taryn’s project was intended to provoke reflection about Google’s Image Search engine.

Project Concept

This prototype will utilize Google’s Autocomplete suggestions as a means to illuminate how the search engine works. It is not intended to make a moral judgement on the subject. Instead, its purpose is to lead users to question their own knowledge of the system.

The Autocomplete engine provides different results depending on the Google service being used. To provide comparison, this prototype will allow access to the suggestions from Google Web and Google Images.

The user input will be restricted to one character only. The reasons for this constraint are:

  • lower complexity; a reduced number of suggestions provides fewer and clearer comparisons.
  • emphasis on the system’s response, not the user input.

Apart from the input restriction, the prototype will use a familiar interface: a regular search box followed by a button. The prototype must also provide room for iteration and entertainment, in order to engage users in the exploration of the suggestions. For that reason, it will use a visual output instead of the regular list of search results.

Methodology

Database: Google Search and Autocomplete feature

The old Google Web Search API [@developers] and Google Images Search API [@google-1] have been deprecated as of 2010 and 2011, respectively. Though still functioning, they will stop operating soon, and users are encouraged to replace them with the new Google Custom Search API [@custom]. I started by implementing the search box using the new engine. However, this API is actually intended to perform custom searches inside websites, whereas the purpose of this project was to search the entire web using the Google Search engine. In the new API this is only possible through a laborious process: setting up a custom search with any given website, then editing the search to include results from the entire web, and finally removing the website set up at first.

The main problem though was that the autocomplete did not seem to work for the “entire web” option. To sum up, it does not mimic the regular Google Search as I expected.

After some research, I found a Google Autocomplete “API” that allows access to Google’s autocomplete results. However, it seems to be an engine developed for internal use, not really a public API. There is no official documentation from Google, only a few experiments by users who tried to document it. [@google-3] One of them wrote a jQuery plugin [@getimagedata] that allows for searches using various Google services, all through the same undocumented API.

Next, to realise the idea of mimicking Google’s original search, I integrated the autocomplete plugin with the official Google Custom Search bar. The resulting search element mixes custom scripts with Google’s regular service.

[Figure: the combined search bar]

First Iteration

The new custom search is based on simple html tags and does not require JavaScript knowledge. However, that also makes it harder to customize: results can only be displayed according to a predefined set of layouts. The first iteration of this prototype utilises a default layout. By clicking on the tabs, users can search on Google Web or Google Images. These UI elements are all presets from the API.

[Figure: Google Custom Search default layout]

The radio buttons at the top of the page change the autocomplete suggestions. This turned out to be confusing, because the default tabs for the search output have similar labels. Also, the resulting layout, similar to a regular list of results, is not visually appealing.

[Figure: image search results layout]

Second Iteration

The new Search API does not provide access to the results as a structured database, which would make it easy to customise the visual output. The solution was to apply an alternative method (sketched after this list):

  • Scrape the html elements from the page, through the document object model.
  • Erase Google’s injected html elements.
  • Store the addresses of the pictures.
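A rough sketch of that workaround, run in the browser after the custom search has rendered; the gsc-* class names are an assumption about Google’s injected markup and would need verifying.

```javascript
// Collect the picture addresses, then erase Google's injected elements.
var images = document.querySelectorAll('.gsc-results img'); // assumed selector
var sources = [];
for (var i = 0; i < images.length; i++) {
  sources.push(images[i].src); // store the addresses of the pictures
}
var injected = document.querySelector('.gsc-results');
if (injected) {
  injected.parentNode.removeChild(injected); // erase the injected html
}
```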

The idea for the visual output was to display the images forming the letter typed by the user. To do that through an automated process, it is necessary to have a grid to position the images. This could be done by either:

  • Writing a preset of coordinates for each pixel of each letter.
  • Rasterizing a given letter and storing the pixel coordinates.

Though less precise, the second method seemed faster to develop. It was prototyped using Processing first, in order to evaluate its feasibility and visual result.

[Figure: pixelated font test in Processing]

After that, this method was translated to the web page using P5.js [@p5.js], a JavaScript library that utilizes Processing syntax. The process turned out to be laborious, though: there seem to be some flaws in the current implementation of the pixel manipulation methods in P5.
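For reference, the rasterize-and-sample idea looks roughly like this in today’s p5.js; a minimal sketch, not the original code, and the grid size is an arbitrary assumption.

```javascript
// Rasterize a letter on a small off-screen buffer and collect the
// coordinates of its dark pixels, which become the image grid.
var GRID = 20; // cells per side (assumed)

function setup() {
  noCanvas();
  var pg = createGraphics(GRID, GRID);
  pg.pixelDensity(1);
  pg.background(255);
  pg.fill(0);
  pg.textSize(GRID);
  pg.textAlign(CENTER, CENTER);
  pg.text('A', GRID / 2, GRID / 2);
  pg.loadPixels();

  var cells = [];
  for (var y = 0; y < GRID; y++) {
    for (var x = 0; x < GRID; x++) {
      var i = 4 * (y * GRID + x);   // RGBA offset into pixels[]
      if (pg.pixels[i] < 128) {
        cells.push({ x: x, y: y }); // dark pixel → part of the letter
      }
    }
  }
  console.log(cells); // coordinates where images will be placed
}
```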

Besides, the images could not be loaded using the library, due to browser restrictions — see cross-origin resource sharing [@cross-origin] and P5’s developer Lauren McCarthy commenting on the issue [@loadimage]. The final prototype uses regular img tags, instead.

The script loops through a set of 20 images retrieved, positioning them according to the coordinates of the pixelated character. The following diagram sums up the technical process of this prototype:

[Figure: technical process diagram]
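The placement loop itself might look like the following sketch, where cells and sources stand for the hypothetical outputs of the two previous steps (rasterized coordinates and scraped image addresses).

```javascript
// Place the 20 retrieved images over the letter's grid cells.
var CELL = 24; // pixels per grid cell (assumed)
var container = document.getElementById('letter'); // assumed container

cells.forEach(function (cell, i) {
  var img = document.createElement('img');
  img.src = sources[i % 20];                 // cycle through the 20 addresses
  img.style.position = 'absolute';
  img.style.left = (cell.x * CELL) + 'px';
  img.style.top = (cell.y * CELL) + 'px';
  img.style.width = img.style.height = CELL + 'px';
  container.appendChild(img);
});
```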

User Tests

The second prototype was published online [@googles] and sent to two different users. Besides the search UI elements, the only information displayed was:

  • Title: “Google’s ABC”
  • Description: “The alphabet through Google’s autocomplete”
  • A question mark linking to a blog post [@thesis] that outlined the project and part of its process.

[Screenshots: letters rendered from image results]

The feedback from the users can be separated into 3 categories, according to the problems they address: interactive, visual, and communicative/conceptual.

  • Interactive
    • Lag when loading the results, made worse by the lack of feedback from the system.
    • Autocomplete not always responsive.
    • Radio buttons and autocomplete not working well together. Autocomplete options disappear after radio is selected.
  • Visual
    • Visual shapes not always forming recognisable letters.
    • Search box overlapping with letter, making some harder to recognise.
  • Communicative/conceptual
    • Message not clear. Comparing the results and critically thinking about them was only possible after reading the blog post.

Findings and Next Steps

The visual and interactive problems pointed out by the user tests were addressed in the last iteration of this prototype. The solutions were mostly technical and involved fixing some browser compatibility problems.

The communicative and conceptual problems, though, demand deeper changes in future prototypes. The interface is familiar enough to interact with, but its presumed neutrality does not seem to encourage comparison, on an immediate level, nor reflection, as an ultimate goal. As for the entertaining purposes of this work, the feedback gathered is not conclusive.

Nevertheless, the technical aspects of this prototype point to some interesting aspects of how the autocomplete works. An exploration of the letters for each service, Images and Web, reveals significant differences: the former is mainly composed of celebrities, whereas the latter seems dominated by companies. A more explicit comparison between the two could lead to more engaging interactions.

Because comparison and communication seem to be the main problems to address, these are possible next steps:

  • Compare different systems — Google services? Various search engines?
  • Iterate on ways to make comparisons visual.
  • Sketch visual solutions that concisely communicate the goals.

Bibliography

“$.getImageData.” 2014. Accessed October 20. http://www.maxnov.com/getimagedata/.
“Aaron Swartz’s Art: What Does Your Google Image Search Look Like in Other Countries? | Motherboard.” 2014. Accessed October 14. http://motherboard.vice.com/blog/aaron-swartzs-art.

“About — the Publishing Lab.” 2014. Accessed October 11. http://the-publishing-lab.com/about/.
“About — the Publishing Lab.” 2014. Accessed October 11. http://the-publishing-lab.com/about/.

“Cross-Origin Resource Sharing – Wikipedia, the Free Encyclopedia.” 2014. Accessed October 23. http://en.wikipedia.org/wiki/Cross-origin_resource_sharing.

“Custom Search — Google Developers.” 2014. Accessed October 17. https://developers.google.com/custom-search/.

“Developer’s Guide – Google Web Search API (Deprecated) — Google Developers.” 2014. Accessed October 19. https://developers.google.com/web-search/docs/.

“Google Autocomplete ‘API’.” Shreyas Chand. 2014. Accessed October 17. http://shreyaschand.com/blog/2013/01/03/google-autocomplete-api/.

“Google Image Search API (Deprecated) — Google Developers.” 2014. Accessed October 19. https://developers.google.com/image-search/.

“Google Poetics.” 2014. Accessed October 22. http://www.googlepoetics.com/.

“Google’s ABC.” 2014. Accessed October 25. http://54.204.173.108/parsons/thesis_1/google_alphabet/.

“Google’s Autocomplete Spells Out Our Darkest Thoughts | Arwa Mahdawi | Comment Is Free | Theguardian.com.” 2014. Accessed October 22. http://www.theguardian.com/commentisfree/2013/oct/22/google-autocomplete-un-women-ad-discrimination-algorithms.

Krug, Steve. 2005. Don’t Make Me Think: A Common Sense Approach to Web Usability, 2nd Edition. 2nd edition. Berkeley, Calif: New Riders.

“loadImage() Can No Longer Specify Extension with Second Parameter. · Issue #171 · Lmccart/p5.js.” 2014. Accessed October 23. https://github.com/lmccart/p5.js/issues/171.

“P5.js.” 2014. Accessed October 23. http://p5js.org/.

Paul, Christiane. 2007. The Database as System and Cultural Form: Anatomies of Cultural Narratives. http://visualizinginfo.pbworks.com/f/db-system-culturalform.pdf.

“The Dumpster: A Visualization of Romantic Breakups for 2005.” 2014. Accessed October 22. http://artport.whitney.org/commissions/thedumpster/.

“[Thesis 1] Google ABC – Technical Module Prototype | Gabriel.” 2014. Accessed October 25. http://gabrielmfadt.wordpress.com/2014/10/20/thesis-1-google-abc-technical-module-prototype/.

“Top 15 Most Popular Search Engines October 2014.” 2014. Accessed October 22. http://www.ebizmba.com/articles/search-engines.

“We Feel Fine / by Jonathan Harris and Sep Kamvar.” 2014. Accessed October 22. http://wefeelfine.org/.

Whitelaw, Mitchell. 2014. “FCJ-067 Art Against Information: Case Studies in Data Practice.” Accessed September 22. http://eleven.fibreculturejournal.org/fcj-067-art-against-information-case-studies-in-data-practice/.

[Thesis 1] Google ABC – Technical Module Prototype

B is for Beyonce, according to Google Images

This prototype is still under development, but a functional version can be accessed here.

I’m trying to build a sort of Google ABC — an analogy between alphabet books and Google’s autocomplete engine. The idea is to make people reflect on how this system works.
Besides, Google has been our default choice when searching for knowledge in recent years. Choosing an alphabet book as an analogy is a way of commenting on this.

This work is not finished yet, but here’s the process so far:

  1. The old Google Web Search API and Google Images Search API have been deprecated as of 2010 and 2011, respectively. Though still functioning, they will stop operating soon and users are encouraged to replace them with the new Google Custom Search API. I started by implementing the search box using the new engine.
    However, this API is actually intended to perform custom searches inside websites. The purpose of this project, though, was to search the entire web using the Google Search engine. This is possible through a laborious process: setting up a custom search with any given website, then editing the search to include results from the entire web, and removing the website set up at first.
    The main problem though was that the autocomplete did not seem to work for the “entire web” option. To sum up, it does not mimic the regular Google Search as I expected.
  2. I found a Google Autocomplete “API.” It allows access to Google’s autocomplete results. However, it seems to be an engine developed for internal use, not really an API. There is no official documentation from Google.
    After a few more searches, I found a jQuery plugin that allows for searches using various Google services, all through the same non-documented API.
  3. Integrated the autocomplete plugin with the Google Custom Search bar.

    Google Custom Search default layout
  4. The new custom search is based on simple html tags and does not require JavaScript knowledge. However, that also makes it harder to customize. Results can only be displayed according to a predefined set of layouts.
    The solution was to scrape the data from the results, erase Google’s injected html elements, and store only the picture sources.

    Google Custom Search Layouts
  5. I started working on a layout to display the results. My idea was to form the initials using images. To do so, I’d have to grab the letter typed by the user, draw it on a canvas and get the pixel results. I made a short Processing sketch to test the method first.

    Processing sketch to draw pixelated fonts.
  6. Because this small experiment seemed to work fine, I decided to implement the page display using P5.js, a JavaScript library that utilizes Processing syntax. The process turned out to be laborious, though. There seem to be some flaws in the current implementation of the pixel manipulation methods in P5. Due to browser restrictions — see cross-origin resource sharing and developer Lauren McCarthy’s comments on the P5 Github repository — the images had to be displayed as regular html elements.

Code on Github.

[Thesis 1] Methodological Module Prototype


City of Angels and Wings of Desire: Nicolas Cage might be the only connection between this prototype and ‘internet culture.’

This prototype is a visual comparison of sound and character lines from 2 movies: City of Angels and Wings of Desire. It is an attempt to:
* Apply Lev Manovich’s method from “Visualizing Vertov” to sound, instead of shots.
* Determine if/how patterns emerge from a sound analysis of a movie.

This analysis would be completely useless without the data visualizer’s best friend: comparison. That’s why I’m showing 2 movies instead of one. Choosing a European movie and its American remake seemed like a good basis for comparison.


How to read it
The area chart shows the relative sound volume of each movie. Because I have no access to good digital copies of either movie, it made no sense to draw a scale with absolute values.
The red rectangles above the chart show dialogues. I used data from the subtitles to draw them (see the sketch below).
The slider and play button work like in a regular audio player.
Click here to access the prototype.
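For the curious, turning subtitles into those rectangles is mostly a matter of parsing SRT timings; a hedged sketch, assuming standard .srt input:

```javascript
// Parse "00:01:02,500 --> 00:01:05,000" lines from an .srt file into
// {start, end} intervals in seconds (one interval per red rectangle).
function parseSrt(text) {
  var re = /(\d{2}):(\d{2}):(\d{2}),(\d{3}) --> (\d{2}):(\d{2}):(\d{2}),(\d{3})/g;
  var intervals = [];
  var m;
  while ((m = re.exec(text)) !== null) {
    var start = (+m[1]) * 3600 + (+m[2]) * 60 + (+m[3]) + (+m[4]) / 1000;
    var end = (+m[5]) * 3600 + (+m[6]) * 60 + (+m[7]) + (+m[8]) / 1000;
    intervals.push({ start: start, end: end });
  }
  return intervals;
}
```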

Browser compatibility
* Safari: best results
* Chrome: slow interface, but fully operational
* Firefox: might not work

Also, bear in mind that the sound files from these movies are huge. I did my best to manage all the sound loading events, but expect some delay before playing.

Notes
The process and reasons behind this prototype are documented in this presentation. In-depth explanations will be presented in the forthcoming paper. I’ll update this post as soon as it is ready.