LSRV: The Large-Scale Road Validation Dataset
Abstract
The LSRV dataset is collected and shared by the RSIDEA research group of Wuhan University and was built for validating the road detection task. It contains images of Boston and its surrounding cities in the United States, Birmingham in the United Kingdom, and Shanghai in China. Compared with existing public road datasets, the LSRV dataset provides images of different resolutions from different regions, which allows the generalization ability of a model to be verified more broadly.

1. The LSRV dataset

The LSRV dataset contains three large-scale images, which were collected from Google Earth and accurately labeled for evaluation. Fig. 1 and Table 1 give the details of the large-scale images. Among them, the ground-object distribution in the Shanghai image is quite different from that in the Boston and Birmingham images: the buildings in Shanghai are relatively tall and dense, with many narrow roads between them.
2. Download

We provide two ways to download the LSRV dataset. We hope you can fill in a short questionnaire before downloading, which will appear after clicking the download link.
● Download link: download
● Baidu Drive (extraction code: IDEA): download

3. Experiment

Tables 2, 3, and 4 list the quantitative results for the Boston, Birmingham, and Shanghai images, respectively. The training set for these experiments is the DeepGlobe road dataset [1], and the methods used are described in [2] and [3].
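The quantitative results compare predicted road masks against the LSRV labels pixel by pixel. The sketch below is for illustration only and is not the authors' evaluation code; it assumes binary prediction and ground-truth masks and uses hypothetical file names, computing common road-extraction scores such as precision, recall, F1, and IoU.

```python
import numpy as np

def road_metrics(pred, gt, eps=1e-12):
    """Pixel-wise precision, recall, F1, and IoU for binary road masks.

    pred, gt: arrays of the same shape, with 1 for road pixels and 0 for background.
    """
    pred = np.asarray(pred).astype(bool)
    gt = np.asarray(gt).astype(bool)
    tp = np.logical_and(pred, gt).sum()    # road pixels correctly predicted as road
    fp = np.logical_and(pred, ~gt).sum()   # background pixels predicted as road
    fn = np.logical_and(~pred, gt).sum()   # road pixels missed by the prediction
    precision = tp / (tp + fp + eps)
    recall = tp / (tp + fn + eps)
    f1 = 2 * precision * recall / (precision + recall + eps)
    iou = tp / (tp + fp + fn + eps)
    return {"precision": precision, "recall": recall, "f1": f1, "iou": iou}

# Hypothetical usage: compare a model's prediction with the LSRV label of one image.
# pred_mask = np.load("boston_prediction.npy")  # hypothetical file name
# gt_mask = np.load("boston_label.npy")         # hypothetical file name
# print(road_metrics(pred_mask, gt_mask))
```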
Reference

1. Demir, I., Koperski, K., Lindenbaum, D., Pang, G., Huang, J., Basu, S., Hughes, F., Tuia, D., Raska, R., 2018. DeepGlobe 2018: A challenge to parse the earth through satellite images. 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW). IEEE, pp. 172-17209.
2. Chaurasia, A., Culurciello, E., 2017. LinkNet: Exploiting encoder representations for efficient semantic segmentation. 2017 IEEE Visual Communications and Image Processing (VCIP). IEEE, pp. 1-4.
3. Lu, X., Zhong, Y., Zheng, Z., Zhang, L., 2021. GAMSNet: A Novel Globally Aware Road Detection Network with Multi-Scale Residual Learning. ISPRS J. Photogramm. Remote Sens. 175, 340-352.

4. Copyright

The copyright belongs to the Intelligent Data Extraction, Analysis and Applications of Remote Sensing (RSIDEA) academic research group, State Key Laboratory of Information Engineering in Surveying, Mapping, and Remote Sensing (LIESMARS), Wuhan University. The LSRV road dataset may be used for academic purposes only, and the following paper must be cited; any commercial use is prohibited. Otherwise, RSIDEA of Wuhan University reserves the right to pursue legal responsibility.

Reference: [1] Lu, X., Zhong, Y., Zheng, Z., Zhang, L., 2021. GAMSNet: A Novel Globally Aware Road Detection Network with Multi-Scale Residual Learning. ISPRS J. Photogramm. Remote Sens. 175, 340-352.

5. Contact

If you have any problems or feedback when using the LSRV road dataset, please contact:
Ms. Xiaoyan Lu: luxiaoyan@whu.edu.cn
Prof. Yanfei Zhong: zhongyanfei@whu.edu.cn