Please use this identifier to cite or link to this item: https://hdl.handle.net/10316/44075
DC Field / Value
dc.contributor.author: Tracewski, Lukasz
dc.contributor.author: Bastin, Lucy
dc.contributor.author: Fonte, Cidalia C.
dc.date.accessioned: 2017-10-21T21:31:45Z
dc.date.available: 2017-10-21T21:31:45Z
dc.date.issued: 2017-09-18
dc.identifier.issn: 1993-5153
dc.identifier.issn: 1009-5020
dc.identifier.uri: https://hdl.handle.net/10316/44075
dc.description.abstract: This paper extends recent research into the usefulness of volunteered photos for land cover extraction, and investigates whether this usefulness can be automatically assessed by an easily accessible, off-the-shelf neural network pre-trained on a variety of scene characteristics. Geo-tagged photographs are sometimes presented to volunteers as part of a game which requires them to extract relevant facts about land use. The challenge is to select the most relevant photographs in order to extract the useful information most efficiently while maintaining the engagement and interest of volunteers. By repurposing an existing network which had been trained on an extensive library of potentially relevant features, we can quickly carry out initial assessments of the general value of this approach, pick out especially salient features, and identify focus areas for future neural network training and development. We compare two approaches to extract land cover information from the network: a simple post hoc weighting approach accessible to non-technical audiences, and a more complex decision tree approach that involves training on domain-specific features of interest. Both approaches had reasonable success in characterizing human influence within a scene when identifying the land use types (as classified by Urban Atlas) present within a buffer around the photograph's location.
This work identifies important limitations and opportunities for using volunteered photographs as follows: (1) the false precision of a photograph's location is less useful for identifying on-the-spot land cover than the information it can give on neighbouring combinations of land cover; (2) ground-acquired photographs, interpreted by a neural network, can supplement plan view imagery by identifying features which will never be discernible from above; (3) when dealing with contexts where there are very few exemplars of particular classes, an independent a posteriori weighting of existing scene attributes and categories can buffer against over-specificity.
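The "post hoc weighting" approach described in the abstract can be illustrated with a minimal sketch: scene-category scores emitted by a pre-trained network are combined into land-cover class scores through a fixed weight table. The category names, weights, and function below are hypothetical illustrations, not the paper's actual mapping or the Urban Atlas classes.

```python
# Hypothetical mapping from network scene categories to land-cover
# classes; in practice this table would be assigned a posteriori by
# a domain expert rather than learned.
SCENE_TO_LANDCOVER = {
    "highway":     {"urban": 0.9, "agricultural": 0.1},
    "farm":        {"urban": 0.1, "agricultural": 0.9},
    "forest_path": {"forest": 0.8, "urban": 0.05},
}

def land_cover_scores(scene_scores):
    """Weight each scene-category score into land-cover class totals."""
    totals = {}
    for scene, p in scene_scores.items():
        for lc, w in SCENE_TO_LANDCOVER.get(scene, {}).items():
            totals[lc] = totals.get(lc, 0.0) + p * w
    return totals

# A photo scored mostly as "farm" by the pre-trained network:
scores = land_cover_scores({"farm": 0.7, "highway": 0.2, "forest_path": 0.1})
# "agricultural" receives the highest combined score.
```

Because the weights are assigned independently of the training data, rare classes are not penalized for having few exemplars, which is the buffering effect against over-specificity noted in point (3) of the abstract.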
dc.language.iso: eng
dc.rights: openAccess
dc.subject: Land cover
dc.subject: Land use
dc.subject: volunteered geographic information (VGI)
dc.subject: photograph
dc.subject: convolutional neural network
dc.subject: machine learning
dc.title: Repurposing a deep learning network to filter and classify volunteered photographs for land cover and land use characterization
dc.type: article
degois.publication.firstPage: 252
degois.publication.lastPage: 268
degois.publication.issue: 3
degois.publication.title: Geo-spatial Information Science
dc.relation.publisherversion: http://dx.doi.org/10.1080/10095020.2017.1373955
dc.peerreviewed: yes
dc.identifier.doi: 10.1080/10095020.2017.1373955
degois.publication.volume: 20
item.openairetype: article
item.languageiso639-1: en
item.openairecristype: http://purl.org/coar/resource_type/c_18cf
item.cerifentitytype: Publications
item.grantfulltext: open
item.fulltext: With full text
crisitem.author.researchunit: INESC Coimbra – Institute for Systems Engineering and Computers at Coimbra
crisitem.author.orcid: 0000-0001-9408-8100
Appears in Collections:I&D INESCC - Artigos em Revistas Internacionais

SCOPUS™ Citations: 40 (checked on Mar 25, 2024)
Web of Science™ Citations: 38 (checked on Mar 2, 2024)
Page view(s): 2,758 (checked on Mar 26, 2024)
Download(s): 447 (checked on Mar 26, 2024)



Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.