

Algorithmic design in neural architecture search (NAS) has received a lot of attention, aiming to improve performance and reduce computational cost. Despite the great advances made, few authors have proposed initialization techniques tailored to NAS. However, the literature shows that a good initial set of solutions facilitates finding the optima. Therefore, in this study, we propose a data-driven technique to initialize a population-based NAS algorithm. First, we perform a calibrated clustering analysis of the search space; second, we extract the centroids and use them to initialize the NAS algorithm.
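To make the procedure concrete, the sketch below shows one plausible implementation of the clustering-based initialization. It is a minimal sketch rather than the exact procedure: it assumes architectures are already encoded as fixed-length numeric vectors (e.g., a flattened adjacency matrix plus operation labels, as in NAS-Bench-101), and it interprets the "calibration" step as selecting the number of clusters via the silhouette score; all function and parameter names are illustrative.

```python
# Minimal sketch of centroid-based population initialization; NOT the authors'
# exact procedure. Assumption: `encoded_archs` holds architectures encoded as
# fixed-length numeric vectors sampled from the search space.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

def init_population(encoded_archs: np.ndarray, k_range=range(2, 21), seed=0):
    """Cluster a sample of the search space and return, per cluster, the
    sampled architecture closest to the centroid as the initial population."""
    best_score, best_model = -1.0, None
    for k in k_range:
        # "Calibration" interpreted here as picking k by silhouette score.
        model = KMeans(n_clusters=k, n_init=10, random_state=seed).fit(encoded_archs)
        score = silhouette_score(encoded_archs, model.labels_)
        if score > best_score:
            best_score, best_model = score, model
    # A raw centroid is generally not a valid architecture, so snap each
    # centroid to its nearest sampled architecture.
    population = []
    for c in best_model.cluster_centers_:
        nearest = np.argmin(np.linalg.norm(encoded_archs - c, axis=1))
        population.append(encoded_archs[nearest])
    return np.stack(population)
```

Snapping each centroid to its nearest sampled neighbour guarantees that every member of the initial population is a valid point of the search space, while preserving the diversity imposed by the clustering.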

We benchmark the proposed approach against random and Latin hypercube sampling initialization using three population-based algorithms, namely a genetic algorithm, an evolutionary algorithm, and aging evolution, on CIFAR-10. More specifically, we use NAS-Bench-101 to leverage the availability of tabular NAS benchmarks. The results show that, compared to random and Latin hypercube sampling, the proposed initialization technique achieves significant long-term improvements for two of the three search baselines, and in some cases across various search scenarios (i.e., training budgets). Besides, we investigate how an initial population gathered on the tabular benchmark can be used to improve search on another dataset, So2Sat LCZ-42. Our results show similar improvements on the target dataset, despite a limited training budget. Moreover, we analyse the distributions of the solutions obtained and find that the population provided by the data-driven initialization technique enables retrieving local optima (maxima) of high fitness and similar configurations.
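For reference, the two baseline initializations can be sketched as follows, assuming the same fixed-length encoding mapped onto the unit hypercube; decoding a row back into a concrete architecture is search-space specific and omitted here.

```python
# Minimal sketch of the two baseline initializations (Latin hypercube and
# uniform random sampling) over the unit cube [0, 1]^dim; the decoding of
# each row into an architecture is search-space specific and not shown.
import numpy as np
from scipy.stats import qmc

def lhs_population(pop_size: int, dim: int, seed=0) -> np.ndarray:
    """Latin hypercube sampling: stratifies each dimension into pop_size bins."""
    sampler = qmc.LatinHypercube(d=dim, seed=seed)
    return sampler.random(n=pop_size)

def random_population(pop_size: int, dim: int, seed=0) -> np.ndarray:
    """Uniform random sampling over the same unit cube."""
    rng = np.random.default_rng(seed)
    return rng.random((pop_size, dim))
```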
