Transboundary E-waste

Main Menu
Introduction: a map of the map. An introductory page for users after the landing page.
Defining a starting point for the controversy map. A description of how we obtained a floating statement for the controversy map.
Mapping the controversy on the web. A path containing the movements through the web corpus.
Mapping the controversy on the scholarly web. A path leading users through the controversy as it can be traced in the scholarly literature.
Key findings. A short summary of key findings with links to appropriate parts of the map.
Procedures for mapping the wild web. A path through the procedures we used to map the wild web.
Procedures for mapping the scholarly web. A path through the procedures used to map the scholarly web.
References, further reading, and tools. A page offering a list of suggested further reading and descriptions of main tools used in this controversy map.

Authors: Josh Lepawsky, John-Michael Davis, Donny Persaud, Grace Akese, Liwen Chen
Procedure for moving from actors to networks to locations.
How we used Hyphe to explore the network of actors.
Josh Lepawsky, 2017-02-13 (updated 2017-12-06)

The interconnectivity of actors in the corpus was analyzed using Hyphe, a web-based curation tool that identifies links between websites. The websites analyzed in Hyphe were selected from the source document URLs that hosted quotes in the "index of issues", which were derived from the "Concordance of Disagreement" terms in the "statements to debates" movement.
The source document URL for each of the 104 quotes composing the index of issues was recorded in a spreadsheet. Duplicate URLs were identified and eliminated using the "duplicate values" tool in Microsoft Excel, leaving a total of 71 unique URLs. The spreadsheet was then saved as a .csv file and imported into Hyphe.
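The de-duplication and export steps above can be sketched in a few lines of Python. The URLs here are placeholders, not entries from the actual index of issues, and the column header is an assumption about what Hyphe's CSV import expects.

```python
import csv

# Hypothetical sample of source-document URLs recorded from the index of
# issues; the real list held 104 entries that reduced to 71 after
# de-duplication.
urls = [
    "http://example.org/e-waste-report",
    "http://example.org/e-waste-report",
    "http://example.net/basel-convention",
]

# Remove duplicates while preserving the original order, mirroring the
# effect of Excel's "duplicate values" tool.
unique_urls = list(dict.fromkeys(urls))

# Save as a .csv file suitable for import into Hyphe.
with open("web_entities.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["url"])
    for url in unique_urls:
        writer.writerow([url])
```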
Using Hyphe's default settings, we then crawled each website in the list (what Hyphe terms a "web entity"). Only one web entity was a dead link that Hyphe could not crawl, so 70 of the 71 web entities were crawled. The output produced a list of web entities (458 discovered in addition to the 70 original web entities, for a total of 528) and the number of times each discovered web entity was cited. The discovered web entities were then manually curated to remove those irrelevant to the "e-waste debate" (e.g., advertisements, social media, search engines) and produce a more manageable network.
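The citation count Hyphe reports for each discovered entity is, in essence, a tally of inbound links across the crawl. A minimal sketch of that tally, using invented outlink data rather than real crawl output:

```python
from collections import Counter

# Hypothetical crawl output: each inner list holds the web entities that
# one crawled entity links out to (Hyphe reports these as "discovered"
# web entities).
outlinks = [
    ["ban.org", "step-initiative.org"],
    ["ban.org"],
    ["unep.org", "ban.org"],
]

# Tally how many times each discovered entity is cited across the crawl,
# the figure used when curating discovered entities.
citations = Counter(entity for links in outlinks for entity in links)
```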
The network of web entities curated in Hyphe was exported, and the geolocation of each entity's IP address was identified using the Digital Methods Initiative's GeoIP tool. Note that the geographic location of an IP address refers to the location of the web server hosting that IP address. The location of the server and the location of the organization associated with the website/IP address are not always identical. IP addresses with location coordinates (latitude/longitude) were imported into Google Maps for display.
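Once each entity has coordinates, preparing a file that Google Maps (via My Maps) can place as markers amounts to writing a CSV with latitude/longitude columns. The entities, IPs, and coordinates below are placeholders, not output from the GeoIP tool:

```python
import csv

# Hypothetical geolocation output: web entity, server IP, and the
# server's latitude/longitude as returned by a GeoIP lookup.
entities = [
    {"name": "ban.org", "ip": "192.0.2.10", "lat": 47.61, "lng": -122.33},
    {"name": "unep.org", "ip": "198.51.100.7", "lat": 46.20, "lng": 6.14},
]

# Write a CSV with coordinate columns that Google My Maps can import
# and display as map markers.
with open("geolocated_entities.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["name", "ip", "lat", "lng"])
    writer.writeheader()
    writer.writerows(entities)
```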
The web entity file with geolocated IP addresses was then imported into Gephi, a free and open-source network analysis and visualization application. Gephi enables a variety of visualization steps, including coloring nodes and exporting the graph as an interactive graphic using the Sigma.js plugin.
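Gephi natively reads GEXF, an XML graph format, so a network exported from another tool can be handed to Gephi by writing nodes and edges into that structure. A minimal sketch of building a directed GEXF document, with invented node labels rather than entities from our corpus:

```python
# Hypothetical nodes (id -> label) and directed edges (source id, target id).
nodes = {"0": "ban.org", "1": "step-initiative.org"}
edges = [("0", "1")]

lines = [
    '<?xml version="1.0" encoding="UTF-8"?>',
    '<gexf xmlns="http://www.gexf.net/1.2draft" version="1.2">',
    '  <graph defaultedgetype="directed">',
    '    <nodes>',
]
for node_id, label in nodes.items():
    lines.append(f'      <node id="{node_id}" label="{label}" />')
lines.append('    </nodes>')
lines.append('    <edges>')
for i, (src, dst) in enumerate(edges):
    lines.append(f'      <edge id="{i}" source="{src}" target="{dst}" />')
lines.append('    </edges>')
lines.append('  </graph>')
lines.append('</gexf>')

gexf = "\n".join(lines)
```

Opening a file like this in Gephi yields the node-and-edge graph directly, after which layout, coloring, and the Sigma.js export proceed inside Gephi itself.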