Convolutional and Deep Neural Networks (CNNs/DNNs) have received much attention and investment from the research community and industry in recent years, owing to their highly accurate performance in certain classes of machine perception tasks. This, coupled with the ever-increasing demand for smart systems, drives the need for continuous improvement in performance while requiring the technology to become cheaper and more energy-efficient, and thus more widely available in portable/nomadic applications. A large part of these advancements has relied on the improvement of computing hardware over the last few decades, and the majority of CNNs/DNNs today run on high-performance computing platforms such as multi-core CPUs and GPUs. However, there is a push towards reducing cost and, especially, improving energy efficiency. This has given rise to new and interesting research problems that require pushing the boundaries of classical architecture design paradigms and co-optimising them with circuit design and technology implementation.
In this PhD proposal we want to pursue this architecture-circuit-technology co-optimisation direction. A pipelined data-flow scheme that eliminates the need for costly local (SRAM) memory accesses during tensor convolution execution will serve as the basis. However, many ways exist to project that data-flow onto a cost- and energy-effective architecture and circuit. We want to explore this broad search space in the context of (3+1)-D convolutions. We also want to exploit emerging technology options that support effective use of the third scaling dimension. This can yield potentially strong gains in the interconnections, which typically dominate the realisation of large processing networks such as CNNs/DNNs.
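To make the data-flow idea concrete, the sketch below shows a line-buffer style streaming convolution: input rows arrive one at a time, only the last K rows are held in a small local buffer, and each input pixel is fetched from main memory exactly once. This is a minimal illustrative model only; the actual pipelined scheme used as the basis of this project, and the function and variable names here, are not taken from the proposal.

```python
from collections import deque

def stream_conv2d(rows, kernel):
    """Line-buffer ('streaming') 2D valid convolution.

    Input rows arrive one by one; only the last K rows are kept in a
    small local buffer (deque), so each input pixel is read once and
    never re-fetched. Illustrative sketch, not the proposal's scheme.
    """
    K = len(kernel)
    buf = deque(maxlen=K)          # small local buffer: last K rows only
    for row in rows:               # pixels stream in, row by row
        buf.append(row)
        if len(buf) == K:          # full K x K windows are now available
            W = len(row)
            out_row = []
            for x in range(W - K + 1):
                acc = 0
                for ky in range(K):
                    for kx in range(K):
                        acc += buf[ky][x + kx] * kernel[ky][kx]
                out_row.append(acc)
            yield out_row          # one output row per new input row

# Usage: 4x4 input streamed through a 3x3 cross-shaped kernel.
img = [[1, 2, 3, 4],
       [5, 6, 7, 8],
       [9, 10, 11, 12],
       [13, 14, 15, 16]]
ker = [[0, 1, 0],
       [1, 1, 1],
       [0, 1, 0]]
result = list(stream_conv2d(iter(img), ker))  # -> [[30, 35], [50, 55]]
```

The design point of interest is the buffer size: the on-chip state is K rows regardless of image height, which is exactly the kind of trade-off (local storage versus memory traffic) the architecture-circuit search space of this project spans.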
Required background: Electrical Engineering
Type of work: 30% architecture design, 40% circuit design, 30% implementation and simulation