[Thesis 1] Google’s Supersearch – Evaluative Module Prototype

supersearch

This project retrieves information from Google’s autocomplete — more specifically, from 7 different Google services.

One thing that wasn’t very clear in the previous prototype is that autocomplete depends on the service you’re using — YouTube, Images, Web, etc. I found this interesting and built something that allows people to compare these suggestions on a single page.

By clicking on the links, users are redirected to the search results of that term.
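For reference, a minimal sketch of how such links can be built (not the actual project code; the result-page URL patterns are assumptions about each service’s public search pages and may change):

```js
// Minimal sketch of building the result links (not the project's actual code).
// The URL patterns are assumptions about each service's public search page.
const SERVICES = {
  web: (q) => `https://www.google.com/search?q=${encodeURIComponent(q)}`,
  images: (q) => `https://www.google.com/search?tbm=isch&q=${encodeURIComponent(q)}`,
  youtube: (q) => `https://www.youtube.com/results?search_query=${encodeURIComponent(q)}`,
  news: (q) => `https://news.google.com/search?q=${encodeURIComponent(q)}`,
};

// Turn one suggestion into a clickable link per service.
function linksFor(suggestion) {
  return Object.entries(SERVICES)
    .map(([name, url]) => `<a href="${url(suggestion)}">${name}: ${suggestion}</a>`)
    .join('<br>');
}

document.body.innerHTML = linksFor('why is'); // render links for one suggestion
```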

[Thesis 1] Interview #1 – Jer Thorp

Bio

Jer Thorp defines his work as ‘software-based’ and ‘data-focused.’ He is the co-founder of The Office for Creative Research, along with Ben Rubin and Mark Hansen, and teaches at NYU’s ITP Program.

Some of his pre-OCR works include the algorithm for the 9/11 Memorial and Project Cascade. The latter was developed by Jer as a Data Artist in Residence at the NY Times R&D Group.

Interview

I talked to Jer for about 40 minutes. I didn’t conduct a formal interview. Instead, we talked about some of my ideas for this project and things he was working on. I tried to expand this to a conversation about broader themes in the field as well — future directions for data visualization, the role of data art, cultural changes related to data, etc. That is why this transcription doesn’t follow a Q&A format.

On the idea of ‘direct visualization’ as a form of representation closer to the artifact (the thing itself)

That’s a philosophically very deep question. You can get closer to the measurement, but it’s very hard to get close to the thing. That requires you to analyse not only the act of representation: you also have to consider the act of measurement and the intent of the measurement; it’s all built into it. In that case [Manovich’s film visualizations], I would consider the dataset to be this kind of analytics that’s running on the images. There’s a lot of decisions being made there.

On the limitations of algorithmic-based cultural analysis

There’s this myth that the computer and algorithms allow you some type of purity. That is not true at all. These analytics allow us to do things we’ve never been able to do before. But that doesn’t mean we can dispense with ethnography, the old-fashioned ‘talk to people,’ journalistic research, and so on. [Cultural analytics] is a tool, and a very powerful one. But it needs to be paired with other pieces.

About 3d (sculptural) visualisations versus 2d, in terms of perception

Scale and perspective make a huge difference for everything. I’m skeptical about those studies, though, because if you dig up the papers, half of them were done with a group of about 20 grad students, almost all white. And then we build all of our decisions based on them.

For us [OCR], it’s all about communication, which is a different thing. David Carson has this famous quote: “don’t confuse legibility with communication.” Because most of the time in our projects our fundamental goal is not to allow people to see “this is 6.8 or 6.2,” but instead to give them a way to interpret that, or to have a feeling about it, or to construct a narrative from there. Of course rigour is important, so we’re not gonna show things in a way that is misleading. But I don’t believe in best practices in that sense. What’s best practices for your magazine might not be best practices for another magazine. And what if I’m projecting on a wall on a building, or what if it’s a sculptural form? These are things that we have no rules for, which I think is why I like those things more.

About future directions for the data visualization field

We’ve been working a lot with performance, and trying to think about what it means to perform data. We’re doing a long residency with the Museum of Modern Art. It’s an algorithmic performance: we write the scripts, and actors perform them in a gallery. It’s very much like traditional theatre in a way, but the content is generated using the data techniques we developed. That’s pretty interesting to me.

I think sculpture, data in a physical form, is still in its infancy. Most of the works we’ve seen in the past 6 years look like something you just pulled out of the screen and plotted on a table. A lot of them look like 3D renders, because of the 3D printers. There’s a ton of possibilities that haven’t been explored. We think about shape a lot in sculpture, but we don’t think a lot about material, its relation to the body or to a room, architecture, design, temperature… There’s a ton of room to do interesting things in that department.

On the reasons for a large number of recent data-related projects

Something about this movement has to do with something that’s happening in culture. It’s about and around this kind of data-based transformation that’s happening in the world. So it’s less about how it’s being done and more about why. It’s about the NSA, radical changes in transparency, wearable sensors, all those things coming together in this really big cultural change.

I’m definitely skeptical about the advertising slogans that have been used to promote this stuff, but I’m optimistic about its potential. Last year was really interesting, because for the first time we started to have real conversations about the exclusionary nature of big data. What does data mean for underprivileged communities? What decisions have been made based on a completely white North-American frame of thought? And how can we do better? That to me is really exciting.

The work we do here, and the work that a lot of people are doing, is fundamentally about trying to push a cultural change in how we think about data. And that is gonna take a long time, it’s probably gonna require a lot more than a 9-person studio pushing against it, but what we need is a generation of people who understand data and collectively can make decisions.

Most people don’t even have a good understanding of what data is. And it’s fundamentally easy to talk about data as measurements of something. If you want to be more accurate, you can say it is records of measurements of something. And it’s important to include ‘measurement’ in it because it is a human act. So it is a human artefact. We can program machines to do it, but they’re still doing it based on our decisions — until strong A.I. is developed, there’s no data that is not fundamentally human data.

Take-aways

This interview was conducted in an earlier phase of my project, when my main questions were about representation — how to get close to the thing, or the artifact. It had some impact on my later decisions, leading me to turn my focus to the process instead of the result. In other words, if every data visualization process is based on decisions and implies loss, is it possible to make it transparent? Can data visualization lead to a better understanding of data itself?

[printmaking] Pomodoro

Since January, I’ve been tracking my out-of-class activity at Parsons and also the time I spend on freelance projects. I don’t use any software, just a spreadsheet:

date | class | project | activity | start | pomodoros
10/26/2014 | printmaking | abc | designing | 17:00 | ///

— where ‘pomodoro’ refers to the Pomodoro technique. Each slash here represents half an hour.
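For context, a minimal sketch of how the slash notation can be tallied, written in JavaScript for illustration (not the openFrameworks sketch mentioned below); it assumes the spreadsheet is exported as a CSV named pomodoro.csv with the columns shown above:

```js
// Minimal sketch: tally hours per project from the slash notation.
// Assumes a CSV export named "pomodoro.csv" with the columns shown above.
const fs = require('fs');

const rows = fs.readFileSync('pomodoro.csv', 'utf8')
  .trim()
  .split('\n')
  .slice(1)                        // skip the header row
  .map((line) => line.split(','));

const hoursPerProject = {};
for (const row of rows) {
  const project = row[2];
  const pomodoros = row[5] || '';
  const hours = (pomodoros.match(/\//g) || []).length * 0.5; // each "/" = half an hour
  hoursPerProject[project] = (hoursPerProject[project] || 0) + hours;
}

console.log(hoursPerProject); // e.g. { abc: 1.5 }
```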

I haven’t done anything useful with the Parsons data so far — the data about freelance projects has helped me plan things better. So I wrote an openFrameworks sketch to visualize it, reading the data from Spring 2014:

pomodoro_1 pomodoro_2 pomodoro_3

The representations are far from readable — the list at the top is much more useful. But my idea was to print it as an abstraction anyway.
I printed the second version and traced it using a ballpoint pen and a Sharpie. I added some textures to move away from the original visualization a bit more.

IMG_3153 IMG_3154 IMG_3160_2 IMG_3160 IMG_3161 IMG_3162

I like the rice paper texture better (blue print), but the Speckletone paper definitely works best for lithography.

[Thesis 1] Design Brief #3

Thesis Question

In digital systems, we access information through databases and software. Graphical interfaces make this process easy and almost invisible. But they might also lead to false assumptions and misconceptions derived from our lack of knowledge of the systems we engage with.

This project is concerned with the relation between users and data as mediated by software. It will explore how the process of gathering information from databases is shaped by the algorithm that drives it. How much do we know about the systems we use daily? How aware are we of possible biases in data retrieval? How can computerised systems be transparent and yet user-friendly?

Research

Databases

In this project, the term database defines any system of information, not restricted to a digital format. As researcher Christiane Paul points out, “it is essentially a structured collection of data that stands in the tradition of ‘data containers’ such as a book, a library, an archive, or Wunderkammer.” [@paul_database_2007]

Though analog and digital databases share this basic definition, the latter allows for multiple ways of retrieving and filtering the data it contains. To do so, it should be organised according to a data model, that is, the structural format in which the data is stored: hierarchical, relational, etc. It should also comprise the software that provides access to the data containers. Finally, according to Paul’s definition, another component of digital databases is the users, “who add a further level in understanding the data as information.” [@paul_database_2007]

Data versus Information

This distinction between data and information is important when analysing the impact of algorithms on data retrieval. Presenting a critical retrospective of data artworks, Mitchell Whitelaw defines data as “the raw material of information, its substrate,” whereas information “is the meaning derived from data in a particular context.” [@whitelaw_fcj-067_] In Whitelaw’s view, any rearrangement of the former is directed towards constructing the latter.

Whitelaw’s criticism is particularly directed towards works where this distinction is blurred. Writing about We Feel Fine [@we] and The Dumpster [@dumpster:], he states that “these works rely on a long chain of signification: (reality); blog; data harvesting; data analysis; visualisation; interface. Yet they maintain a strangely naive sense of unmediated presentation.” [@whitelaw_fcj-067_] The mediation between user and database in these works therefore leads to a confusion of data with information.

Search Boxes

Although Whitelaw writes about those issues in the context of data art, they can be extended to our everyday use of the web. As he writes in the introduction of his article, “in digital, networked culture, we spend our lives engaged with data systems. (…) The web is increasingly a set of interfaces to datasets.” [@whitelaw_fcj-067_] Search boxes constitute a particularly interesting interface in the context of this prototype. Their path to the data is not explicit, as opposed to a set of hyperlinks, which might give users a notion of how the data is structured. Also, their seeming neutrality, usually a single HTML input element followed by a button, reveals very little of the search engine itself.

An exception is the autocomplete feature, which tries to predict users’ input before they finish typing it. By doing so, it reveals subsets of the data and, as a consequence, part of the algorithm behind it.

Google’s Autocomplete

Writing about how users interact with search boxes, Steve Krug tells a story from user tests he performed: “My favorite example is the people (…) who will type a site’s entire URL in the Yahoo search box every time they want to go there (…). If you ask them about it, it becomes clear that some of them think that Yahoo is the Internet, and that this is the way you use it.” [@krug_dont_2005] Krug’s book was originally published in 2000. Fourteen years later, users might not make the URL mistake anymore, and Google has by far surpassed Yahoo! in popularity — approximately 1 billion unique monthly visitors versus 300 million. [@top] However, the notion of a search engine as the web seems to prevail.

In an article for The Atlantic about Google’s autocomplete history, Megan Garber says that by typing a search query “you get a textual snapshot of humanity’s collective psyche.” The same notion is present in a series of reports from the website Mashable.com, with headlines like: “The United States of America According to Google Autocomplete,” and “Google Autocomplete Proves Summer Is Melting Our Brains.” [@google-2] Though the headlines are often humorous, they convey an idea of Google’s autocomplete as an accurate representation of our collective behaviour.

Some reports about the feature are more critical, though. An article in The Guardian about the UN Women campaign based on the feature states that Google’s autocomplete “isn’t always an entirely accurate reflection of the collective psyche. (…) What’s more, autocomplete also serves as a worrying indicator of how, in the interests of efficiency, we’re gradually letting technology complete our thought processes for us.” [@googles-1]

UN Women ad campaign uses Google’s autocomplete to point out sexism

According to Google’s own support page, the suggestions are “automatically generated by an algorithm without any human involvement, based on a number of objective factors.” The sense of neutrality in this sentence is contradicted by the following one, according to which the algorithm “automatically detects and excludes a small set of search terms. But it’s designed to reflect the diversity of our users’ searches and content on the web.” The system is therefore automated, but not free from human decisions.

Precedent

Image Atlas

Image Atlas [@about] was created by Aaron Swartz and Taryn Simon during a hackathon promoted by the arts organization Rhizome. It takes a search term as input, translates it into 17 different languages, and performs searches on Google Images in 17 different countries using the translated terms.

Presenting the project, Aaron said that “these sort of neutral tools like Facebook and Google and so on, which claim to present an almost unmediated view of the world, through statistics and algorithms and analyses, in fact are programmed and are programming us. So we wanted to find a way to visualize that, to expose some of the value judgments that get made.” [@aaron] In other words, Aaron and Taryn’s project was intended to provoke reflection about Google’s Image Search engine.

Project Concept

This prototype will utilize Google’s Autocomplete suggestions as a means to illuminate how the search engine works. It is not intended to make a moral judgement on the subject. Instead, its purpose is to lead users to question their own knowledge of the system.

The Autocomplete engine provides different results depending on the Google service being used. To provide comparison, this prototype will allow access to the suggestions from Google Web and Google Images.

The user input will be restricted to one character only. The reasons for this constraint are:

  • lower complexity; a reduced number of suggestions provides fewer and clearer comparisons.
  • emphasis on the system’s response, not the user input.

Apart from the input restriction, the prototype will use a familiar interface: a regular search box followed by a button. The prototype must also provide room for iteration and entertainment, in order to engage users in the exploration of the suggestions. For that reason, it will use a visual output instead of the regular list of search results.

Methodology

Database: Google Search and Autocomplete feature

The old Google Web Search API [@developers] and Google Images Search API [@google-1] were deprecated in 2010 and 2011, respectively. Though still functioning, they will stop operating soon, and users are encouraged to replace them with the new Google Custom Search API [@custom]. I started by implementing the search box using the new engine. However, this API is actually intended to perform custom searches inside specific websites, whereas the purpose of this project was to search the entire web using the Google Search engine. In the new API this is only possible through a laborious process: setting up a custom search with any given website, then editing the search to include results from the entire web, and finally removing the website you set up at first.

The main problem though was that the autocomplete did not seem to work for the “entire web” option. To sum up, it does not mimic the regular Google Search as I expected.

After some research, I found an unofficial Google Autocomplete “API” that allows access to Google’s autocomplete results. However, it seems to be an engine developed for internal use, not really a public API. There is no official documentation from Google, only a few experiments by users who have tried to document it. [@google-3] One of them wrote a jQuery plugin [@getimagedata] that allows for searches using various Google services, all through the same undocumented API.
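Based on those write-ups, the request looks roughly like the sketch below. This is a reconstruction, not the plugin’s code: the endpoint, its client and ds parameters, and the response format are undocumented assumptions that may change at any time.

```js
// Reconstruction of the kind of request described in those write-ups; not the
// plugin's code. Endpoint, parameters, and response shape are assumptions.
async function suggestions(term, service = '') {
  // 'ds' selects the service ('yt' for YouTube, empty for Web; other values
  // are only documented in the unofficial posts cited above).
  const url = 'https://suggestqueries.google.com/complete/search' +
    `?client=firefox&ds=${service}&q=${encodeURIComponent(term)}`;
  const res = await fetch(url); // run from Node 18+ or another non-CORS-restricted context
  const [, items] = await res.json(); // assumed shape: ["query", ["s1", "s2", ...]]
  return items;
}

// Example: compare Web and YouTube suggestions for the same input.
suggestions('a').then((s) => console.log('web:', s));
suggestions('a', 'yt').then((s) => console.log('youtube:', s));
```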

Next, to realise the idea of mimicking Google’s original search, I integrated the autocomplete plugin with the official Google Custom Search bar. The resulting search element mixes custom scripts with Google’s regular service.

search_bar

First Iteration

The new custom search is based on simple HTML tags and does not require JavaScript knowledge. However, that also makes it harder to customize. Results can only be displayed according to a predefined set of layouts. The first iteration of this prototype utilises a default layout. Clicking on the tabs, users can search on Google Web or Google Images. These UI elements are all presets from the API.

Google Custom Search default layout

The radio buttons at the top of the page change the autocomplete suggestions. This turned out to be confusing, because the default tabs for the search output have similar labels. Also, the resulting layout, similar to a regular list of results, is not visually appealing.

image_search_layout

Second Iteration

The new Search API does not provide access to the results as a structured database, which would have made it easy to customise the visual output. The solution was to apply an alternative method (sketched after this list):

  • Scrape the html elements from the page, through the document object model.
  • Erase Google’s injected html elements.
  • Store the addresses of the pictures.
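A minimal sketch of those three steps (the selector is an assumption about the class names the Custom Search element injects; the real markup may differ):

```js
// Minimal sketch of the scraping step above. The selector is an assumption:
// the actual class names injected by the Custom Search element may differ.
function collectImageSources(container) {
  // 1. read the result images that Google injected into the page
  const sources = Array.from(container.querySelectorAll('img'))
    .map((img) => img.src);

  // 2. erase the injected markup so a custom layout can be drawn instead
  container.innerHTML = '';

  // 3. keep only the addresses of the pictures
  return sources;
}

// e.g. const urls = collectImageSources(document.querySelector('.gsc-results'));
```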

The idea for the visual output was to display the images forming the letter typed by the user. To do that through an automated process, it is necessary to have a grid to position the images. This could be done by either:

  • Writing a preset of coordinates for each pixel of each letter.
  • Rasterizing a given letter and storing the pixel coordinates.

Though less precise, the second method seemed faster to develop. It was prototyped using Processing first, in order to evaluate its feasibility and visual result.

pixelated_font

After that, this method was translated to the web page using P5.js [@p5.js], a JavaScript library that uses Processing syntax. The process turned out to be laborious, though: there seem to be some flaws in the current implementation of the pixel manipulation methods in P5.
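For illustration, a simplified reconstruction of that rasterization step in p5.js (not the actual project code; the grid size and darkness threshold are arbitrary choices):

```js
// Simplified reconstruction of the rasterization step in p5.js (not the
// project's actual code): draw a letter on a tiny off-screen buffer and keep
// the coordinates of its dark pixels. Grid size and threshold are arbitrary.
function letterGrid(letter, cols = 12, rows = 12) {
  const pg = createGraphics(cols, rows); // 1 buffer pixel = 1 grid cell
  pg.pixelDensity(1);                    // avoid retina scaling of the pixel array
  pg.background(255);
  pg.fill(0);
  pg.textAlign(CENTER, CENTER);
  pg.textSize(rows);
  pg.text(letter, cols / 2, rows / 2);
  pg.loadPixels();

  const cells = [];
  for (let y = 0; y < rows; y++) {
    for (let x = 0; x < cols; x++) {
      const r = pg.pixels[4 * (y * cols + x)]; // red channel is enough for b/w
      if (r < 128) cells.push({ x, y });       // dark pixel = part of the letter
    }
  }
  return cells;
}

function setup() {
  noCanvas();
  console.log(letterGrid('b')); // e.g. [{ x: 3, y: 1 }, { x: 4, y: 1 }, ...]
}
```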

Besides, the images could not be loaded using the library, due to browser restrictions — see cross-origin resource sharing [@cross-origin] and P5 developer Lauren McCarthy’s comments on the issue [@loadimage]. The final prototype uses regular img tags instead.

The script loops through the set of 20 retrieved images, positioning them according to the coordinates of the pixelated character. The following diagram sums up the technical process of this prototype:

diagram
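In code, the final positioning step amounts to roughly the following (a simplified sketch; the cell size and the container element are hypothetical):

```js
// Simplified sketch of the layout step: place each retrieved image at one of
// the letter's grid cells, using plain <img> tags. The cell size is arbitrary
// and the container element is hypothetical.
function drawLetter(cells, imageUrls, cellSize = 40) {
  const container = document.getElementById('letter'); // hypothetical wrapper div
  container.style.position = 'relative';
  container.innerHTML = '';

  cells.forEach((cell, i) => {
    const img = document.createElement('img');
    img.src = imageUrls[i % imageUrls.length]; // reuse the ~20 results as needed
    img.width = img.height = cellSize;
    img.style.position = 'absolute';
    img.style.left = `${cell.x * cellSize}px`;
    img.style.top = `${cell.y * cellSize}px`;
    container.appendChild(img);
  });
}
```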

User Tests

The second prototype was published online [@googles] and sent to two different users. Besides the search UI elements, the only information displayed was:

  • Title: “Google’s ABC”
  • Description: “The alphabet through Google’s autocomplete”
  • A question mark linking to a blog post [@thesis] that outlined the project and part of its process.

a

e

The feedback from the users can be separated into 3 categories, according to the problems it addresses: interactive, visual, and communicative/conceptual.

  • Interactive
    • Lag when loading the results, made worse by the lack of feedback from the system.
    • Autocomplete not always responsive.
    • Radio buttons and autocomplete not working well together. Autocomplete options disappear after radio is selected.
  • Visual
    • Visual shapes not always forming recognisable letters.
    • Search box overlapping with letter, making some harder to recognise.
  • Communicative/conceptual
    • Message not clear. Comparing the results and critically thinking about them was only possible after reading the blog post.

Findings and Next Steps

The visual and interactive problems pointed out by the user tests were addressed in the last iteration of this prototype. Their solution was mostly technical and involved fixing some browser compatibility problems.

The communicative and conceptual problems, though, demand deeper changes in future prototypes. The interface is familiar enough to interact with, but its presumed neutrality does not seem to encourage comparison, on an immediate level, nor reflection as an ultimate goal. As for the entertainment purpose of this work, the feedback gathered so far is not conclusive.

Nevertheless, the technical aspects of this prototype point to some interesting aspects of how the autocomplete works. An exploration of the letters for each service, Images and Web, reveals significant differences: the former is mainly composed of celebrities, while the latter seems dominated by companies. A more explicit comparison between the two could lead to more engaging interactions.

Because comparison and communication seem to be the main problems to address, these are possible next steps:

  • Compare different systems — Google services? Various search engines?
  • Iterate on ways to make comparisons visual.
  • Sketch visual solutions that concisely communicate the goals.

Bibliography

“$.getImageData.” 2014. Accessed October 20. http://www.maxnov.com/getimagedata/.

“Aaron Swartz’s Art: What Does Your Google Image Search Look Like in Other Countries? Motherboard.” 2014. Accessed October 14. http://motherboard.vice.com/blog/aaron-swartzs-art.

“About — the Publishing Lab.” 2014. Accessed October 11. http://the-publishing-lab.com/about/.

“Cross-Origin Resource Sharing – Wikipedia, the Free Encyclopedia.” 2014. Accessed October 23. http://en.wikipedia.org/wiki/Cross-origin_resource_sharing.

“Custom Search — Google Developers.” 2014. Accessed October 17. https://developers.google.com/custom-search/.

“Developer’s Guide – Google Web Search API (Deprecated) — Google Developers.” 2014. Accessed October 19. https://developers.google.com/web-search/docs/.

“Google Autocomplete ‘API’ Shreyas Chand.” 2014. Accessed October 17. http://shreyaschand.com/blog/2013/01/03/google-autocomplete-api/.

“Google Image Search API (Deprecated) — Google Developers.” 2014. Accessed October 19. https://developers.google.com/image-search/.

“Google Poetics.” 2014. Accessed October 22. http://www.googlepoetics.com/.

“Google’s ABC.” 2014. Accessed October 25. http://54.204.173.108/parsons/thesis_1/google_alphabet/.

“Google’s Autocomplete Spells Out Our Darkest Thoughts Arwa Mahdawi Comment Is Free Theguardian.com.” 2014. Accessed October 22. http://www.theguardian.com/commentisfree/2013/oct/22/google-autocomplete-un-women-ad-discrimination-algorithms.

Krug, Steve. 2005. Don’t Make Me Think: A Common Sense Approach to Web Usability. 2nd edition. Berkeley, Calif.: New Riders.

“loadImage() Can No Longer Specify Extension with Second Parameter. · Issue #171 · Lmccart/p5.js.” 2014. Accessed October 23. https://github.com/lmccart/p5.js/issues/171.

“P5.js.” 2014. Accessed October 23. http://p5js.org/.

Paul, Christiane. 2007. “The Database as System and Cultural Form: Anatomies of Cultural Narratives.” http://visualizinginfo.pbworks.com/f/db-system-culturalform.pdf.

“The Dumpster: A Visualization of Romantic Breakups for 2005.” 2014. Accessed October 22. http://artport.whitney.org/commissions/thedumpster/.

“[Thesis 1] Google ABC – Technical Module Prototype Gabriel.” 2014. Accessed October 25. http://gabrielmfadt.wordpress.com/2014/10/20/thesis-1-google-abc-technical-module-prototype/.

“Top 15 Most Popular Search Engines October 2014.” 2014. Accessed October 22. http://www.ebizmba.com/articles/search-engines.

“We Feel Fine / by Jonathan Harris and Sep Kamvar.” 2014. Accessed October 22. http://wefeelfine.org/.

Whitelaw, Mitchell. 2014. “FCJ-067 Art Against Information: Case Studies in Data Practice.” Accessed September 22. http://eleven.fibreculturejournal.org/fcj-067-art-against-information-case-studies-in-data-practice/.

[Thesis 1] Google ABC – Technical Module Prototype

B is for Beyonce, according to Google images

This prototype is still under development, but a functional version can be accessed here.

I’m trying to build a sort of Google ABC — an analogy between alphabet books and Google’s autocomplete engine. The idea is to make people reflect on how this system works.
Besides, Google has been our default choice when searching for knowledge in recent years. Choosing an alphabet book as an analogy is a way of commenting on this.

This work is not finished yet, but here’s the process so far:

  1. The old Google Web Search API and Google Images Search API were deprecated in 2010 and 2011, respectively. Though still functioning, they will stop operating soon, and users are encouraged to replace them with the new Google Custom Search API. I started by implementing the search box using the new engine.
    However, this API is actually intended to perform custom searches inside specific websites, whereas the purpose of this project was to search the entire web using the Google Search engine. This is possible through a laborious process: setting up a custom search with any given website, then editing the search to include results from the entire web, and removing the website you set up at first.
    The main problem though was that the autocomplete did not seem to work for the “entire web” option. To sum up, it does not mimic the regular Google Search as I expected.
  2. I found a Google Autocomplete “API.” It allows access to Google’s autocomplete results. However, it seems to be an engine developed for internal use, not really an API. There is no official documentation from Google.
    After a few more searches, I found a jQuery plugin that allows for searches using various Google services, all through the same non-documented API.
  3. Integrated the autocomplete plugin with the Google Custom Search bar.

    Google Custom Search default layout
  4. The new custom search is based on simple HTML tags and does not require JavaScript knowledge. However, that also makes it harder to customize. Results can only be displayed according to a predefined set of layouts.
    The solution was to scrape the data from the results, erase Google’s injected HTML elements, and store only the picture sources.

    Google Custom Search Layouts
  5. I started working on a layout to display the results. My idea was to form the initials using images. To do so, I’d have to grab the letter typed by the user, draw it on a canvas and get the pixel results. I made a short Processing sketch to test the method first.

    Processing sketch to draw pixelated fonts.
  6. Because this small experiment seemed to work fine, I decided to implement the page display using P5.js, a JavaScript library that uses Processing syntax. The process turned out to be laborious, though: there seem to be some flaws in the current implementation of the pixel manipulation methods in P5. Due to browser restrictions — see cross-origin resource sharing and developer Lauren McCarthy’s comments on the P5 GitHub repository — the images had to be displayed as regular HTML elements.

Code on Github.

[Thesis 1] Design Brief #2

Thesis Question

In spite of recent improvements in technology, data visualization continues to use techniques that date back to its origins. Digital interfaces might have changed the way we interact with data, making it possible to filter it, zoom in, and access its details. However, the basic principles for displaying data are still based on reductions to geometric shapes, like rectangles, circles, and lines.

When working with cultural data, this gap between representation and artifact might be particularly problematic. Does a bar chart of the most used words of a book reveal more than the book itself? Can a pie chart of the colors of a painting tell us more about a painting than the actual image? Or is it possible to balance both representations? If so, can modern technologies create new visualization techniques to bridge this gap between representation and artifact?

Research

In his paper “Visualizing Vertov,” [@manovich_visualizing_2013] Lev Manovich analyses the work of Russian filmmaker Dziga Vertov using methods from his Software Studies Initiative. The research intends to show alternative ways to visualize media collections and focuses on the movies The Eleventh Year (1928) and Man with a Movie Camera (1929). It utilizes digital copies of Vertov’s movies provided by the Austrian Film Museum in Vienna. They are the media source from which movie frames are taken, visualized, and analysed using custom software. They also provide metadata for the research: frame numbers, shot properties, and manual annotations. A secondary source utilised by Manovich is Cinemetrics, a crowdsourced database of shot lengths from more than 10,000 movies.

As for the visual representations, traditional techniques, like bars and scatterplots, are complemented by Manovich’s “direct visualization” approach. In this method, “the data is reorganized into a new visual representation that preserves its original form.” [@manovich_what_2010]

Manovich describes his own method for analysis as analogous to how Google Earth operates: from a “bird’s eye” view of a large collection of movies to a zoom into the details of a single frame. The summary of those steps is as follows:

vertov_01

  • Panorama
    • ‘Birds-eye’ view: 20th Century movies compared by ASL (average shot length). Technique: scatterplot.
    • Timeline of mean shot length of all Russian films in the Cinemetrics database. Technique: line chart.
    • Movies from Vertov and Eisenstein compared to other 20th century movies. Technique: scatterplot.

vertov_02

  • Shot length
    • The Eleventh Year and Man with a Movie Camera compared by shot length. Technique: bars, bubbles, and rectangles.
    • Zoom-in of the same visualisations.

vertov_03

  • Shot
    • Each of 654 shots in The Eleventh Year, represented by its second frame. Technique: direct visualization.
    • Shots rearranged based on content (close-ups of faces). Technique: direct visualization.
    • Shots and their length. Technique: direct visualization and bar chart.

vertov_04

  • Frame
    • First and last frames from each shot compared. Technique: direct visualization.
    • Average amount of visual change in each shot. Technique: direct visualization (2nd frame) and bar chart.
    • Average amount of visual change in each shot. Technique: direct visualization (juxtaposition) and bar chart.
    • Frames organised by visual property. Technique: direct visualization.

Also, Manovich uses contextual notes to draw conclusions from the visualisations. His findings are often compared to facts from the history of cinema, Vertov’s early manifestos, and previous studies, confirming or contradicting them.

Project Concept

This prototype will compare two movies utilising some of the methods from “Visualizing Vertov.” It will combine traditional visualization techniques — charts using 2d primitives — and the “direct visualization” approach.

As for the data, it will use a specific component of films: sound. Because Manovich’s research relies largely on visual artefacts, using sound in this prototype might reveal limitations of his method or point to new ways of applying it.

Besides, the Cinemetrics project focuses exclusively on a single aspect of movies: “In verse studies, scholars count syllables, feet and stresses; in film studies, we time shots.” [@cinemetrics] This approach seems to underestimate other quantifiable data that could be used in movie studies.

Despite using sound instead of time or visuals, this prototype will keep the analytical aspect of Cinemetrics and “Visualizing Vertov.” It will then draw conclusions about this approach compared to the supercut method.

To sum up, these are the questions raised by this prototype:

  • Is it possible to apply the “direct visualization” technique to a non-visual artifact?
  • Which patterns and insights can a sound analysis of a movie reveal as compared to a visual analysis?
  • What are the results of an analytical method as compared to the supercut technique?

Research Methodology

Media

All levels of analysis in “Visualizing Vertov” — movies, shot lengths, shots, and frames — utilise comparisons. This device is largely employed in traditional data visualization, and seems to be even more useful for cultural artefacts. In Vertov’s case, for instance, the shot length measurement would not provide any insight if it were not compared to the Russian avant-garde or to 20th-century movies in general.

Following the same logic, this prototype takes a European movie, Wings of Desire (Der Himmel über Berlin, 1987), by German filmmaker Wim Wenders, and its American remake, City of Angels (1998), directed by Brad Silberling.

Data

The following diagram shows the development steps for this prototype:

methodology_diagram

The digital copies of the movies utilised in the first step were not high quality. Also, the process by which the data was gathered does not preserve a high-resolution sample. These are limitations of this prototype, which focused on a rapid technique to extract and compare sound data. They affect the visualization decisions as well.

Visualization

The data, exported as XML, was read and visualised using D3, a JavaScript library for data visualization. D3 provides a fast and reliable way to parse and represent large amounts of data in web documents. Web pages are also able to natively embed media such as sound. Those are the reasons why the final result of this prototype is a web page.

First Iteration

The first iteration of this prototype is a simple area chart depicting the sound variations of the movies. Because of the low quality of the sources, it utilizes a relative scale for each movie: the highest value of each film is mapped to the top of its scale, and all other values are relative to that.

For this reason, the peaks of each movie might differ in absolute decibel value, so this parameter should not be used for comparison.
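For illustration, a condensed sketch of how such a chart can be drawn (written against the current D3 API rather than the 2014 callback-style version; the assumed XML structure, sample elements with t and db attributes, and the file names are placeholders):

```js
// Condensed sketch of the first chart (current D3 syntax; the 2014 prototype
// used the older callback-based API). The XML structure is hypothetical:
// it assumes <sample> elements with "t" (seconds) and "db" (level) attributes.
const width = 900, height = 120;

async function drawVolume(file, selector) {
  const xml = await d3.xml(file);
  const samples = Array.from(xml.querySelectorAll('sample'), (s) => ({
    t: +s.getAttribute('t'),
    db: +s.getAttribute('db'),
  }));

  // relative scale: the loudest sample of *this* movie sits at the top
  const x = d3.scaleLinear(d3.extent(samples, (d) => d.t), [0, width]);
  const y = d3.scaleLinear(d3.extent(samples, (d) => d.db), [height, 0]);

  const area = d3.area()
    .x((d) => x(d.t))
    .y0(height)
    .y1((d) => y(d.db));

  d3.select(selector)                      // e.g. a placeholder <div id="wings">
    .append('svg').attr('width', width).attr('height', height)
    .append('path').datum(samples).attr('fill', 'steelblue').attr('d', area);
}

drawVolume('wings_of_desire.xml', '#wings'); // file names are placeholders
drawVolume('city_of_angels.xml', '#city');
```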

prototype_01

Some visual disparities seem to appear in this first iteration: longer areas with high volume in Wings of Desire versus constant medium-volume ones in City of Angels.

However, the sound volume alone does not seem to provide many insights. Are these variations due to background music? Which patterns represent dialogue? The representation is so highly encoded that it leaves no way to answer these questions.

Second Iteration

In order to add some clarity, the second prototype includes visual marks to represent the parts of the movies where dialogue occurs. A computational method to differentiate music from speech would be laborious and unreliable. The solution was to use part of the first prototype developed for this project, which parses subtitle files into different formats. The software generated JSON files with the timestamps of the subtitles. Like the sound data, they were read and visualised using D3.
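The subtitle parsing itself is straightforward; a minimal sketch, assuming standard .srt input (the actual parser from the earlier prototype may differ):

```js
// Minimal sketch of turning a .srt subtitle file into JSON timestamps,
// assuming the standard SubRip format (the earlier prototype may differ).
const fs = require('fs');

function toSeconds(ts) {                       // "00:02:17,440" -> 137.44
  const [h, m, s] = ts.replace(',', '.').split(':').map(Number);
  return h * 3600 + m * 60 + s;
}

function parseSrt(path) {
  const blocks = fs.readFileSync(path, 'utf8').trim().split(/\r?\n\r?\n/);
  return blocks.map((block) => {
    const match = block.match(/(\d{2}:\d{2}:\d{2},\d{3}) --> (\d{2}:\d{2}:\d{2},\d{3})/);
    return match && { start: toSeconds(match[1]), end: toSeconds(match[2]) };
  }).filter(Boolean);
}

// e.g. write the timestamps consumed by the D3 chart (file names are placeholders)
fs.writeFileSync('wings_subs.json', JSON.stringify(parseSrt('wings_of_desire.srt')));
```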

prototype_02

This new iteration seems to shed some light on how the movies compare. While City of Angels shows a constant use of dialogue, the subtitle marks in Wings of Desire have long blanks in several parts of the movie.

prototype_02a

The presence of the red marks also helps us understand the sound representation. By comparing the two, it is possible to see that the alternating medium-low volume represents dialogue, while the constant, higher areas might indicate background music.

prototype_02b

Even though this iteration offers more insights about the movies, most of them are not yet verifiable. The tool does not let us access the sound itself.

Third Iteration

The last iteration of this prototype embeds the sound extracted from the movies into the page. It is controlled by a slider that matches the horizontal dimension of the charts.
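A rough sketch of that player/slider coupling (the element ids and the chart width are hypothetical):

```js
// Rough sketch of the player: a range input sized to the chart width scrubs
// through an <audio> element. Element ids and the width are hypothetical.
const audio = document.getElementById('movie-sound'); // <audio src="...">
const slider = document.getElementById('scrubber');   // <input type="range">

slider.style.width = '900px'; // match the chart width so positions line up
slider.min = 0;
slider.step = 0.1;

audio.addEventListener('loadedmetadata', () => { slider.max = audio.duration; });
slider.addEventListener('input', () => { audio.currentTime = +slider.value; });
audio.addEventListener('timeupdate', () => { slider.value = audio.currentTime; });
```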

This final representation is analogous to the shot length one in “Visualizing Vertov.” It combines a 2D chart with the artefact itself, trying to bridge the gap between the two.

prototype_03

Findings and Next Steps

Though the third iteration of the prototype includes the sound, it does not achieve the same results as the display of frames in “Visualizing Vertov.” The access to the sound is mediated through a GUI element, which creates an extra step. On the one hand, the overview of the “media collection” (sound fragments) is only accessible through the chart. On the other hand, the access to the sound through the player does not provide an overview, since it is not possible to listen to all the sound fragments at the same time. As opposed to what happens in Manovich’s visualizations, those two representations were not combined.

Nevertheless, the sound analysis does seem to provide some useful insights for movie studies. Even though this rough prototype relies on low-res sources and the support of subtitle files, it reveals some interesting patterns. An analysis of other sound components might yield more interesting findings.

Finally, this prototype shows a radically different approach compared to the supercut technique. The analytical method of translating time to space and reducing sound to visuals has a much less entertaining result. In conclusion, the next steps for this project are:

  • Define its communication purpose — data visualization as an analytical tool? A medium? A language?
  • Continue the explorations of media collections using a “direct visualization” approach.
  • Expand this approach, as much as possible, to media and representations not yet explored by the Software Studies Initiative.

Bibliography

“Cinemetrics – About.” 2014. Accessed October 10. http://www.cinemetrics.lv/index.php.

Manovich, Lev. 2010. “What Is Visualization?” Paj: The Journal of the Initiative for Digital Humanities, Media, and Culture 2 (1). https://journals.tdl.org/paj/index.php/paj/article/view/19.

———. 2013. “Visualizing Vertov.” Russian Journal of Communication 5 (1): 44–55. doi:10.1080/19409419.2013.775546.

[sims 2014] Ribbon drawing prototype (midterm)

Screen Shot 2014-10-06 at 11.18.33 PM

Screen Shot 2014-10-06 at 11.20.42 PM

I’m trying to make this drawing tool that generates ribbons. The final goal is to allow people to draw some interesting things — I can only think of calligraphic shapes, but anyway. And, hopefully, they can apply some physics effects and export everything as a video.
This tool has ONE user in mind: Laura. She needs it to make some letterings.

Everything on the class github.