Need for an Enhanced Methodology for Comprehensive Analysis and Benchmarking of Modern Deep Learning Ecosystems
Authors: Vishakha Agrawal
DOI: https://doi.org/10.5281/zenodo.14593326
Short DOI: https://doi.org/g8xmvv
Country: USA
Abstract: The rapid evolution of deep learning frameworks, hardware accelerators, and deployment environments has created a complex ecosystem that requires standardized benchmarking methodologies. As a result, evaluating the performance, efficiency, and scalability of deep learning systems has become increasingly challenging. Existing benchmarking practices have several limitations, including incomplete coverage of system configurations, inadequate consideration of practical usability factors, and a lack of consistency in evaluation metrics. The current landscape is fragmented, with various benchmarking suites and methodologies used in isolation. This fragmentation hinders comparison across deep learning systems, making it difficult to identify best practices and optimize system design. A standardized benchmarking approach is essential for advancing the field: it would enable fair and meaningful comparisons between systems, facilitate the identification of performance bottlenecks, and guide the development of more efficient and scalable deep learning solutions. Ultimately, the absence of such a unified and comprehensive evaluation methodology impedes the progress of deep learning research and development.
Keywords: Benchmarking, MLPerf, DAWNBench, DeepBench, TensorFlow, Workload Characterization
Paper Id: 231959
Published On: 2020-10-07
Published In: Volume 8, Issue 5, September-October 2020
Cite This: Need for an Enhanced Methodology for Comprehensive Analysis and Benchmarking of Modern Deep Learning Ecosystems - Vishakha Agrawal - IJIRMPS Volume 8, Issue 5, September-October 2020. DOI 10.5281/zenodo.14593326