Image by Martin Adams on Unsplash
What comes to mind when you hear the term “computer vision” (CV)?
About a year ago, I was working on a project that involved a convolutional neural network, believing that this one algorithm made up a large portion of what could be called CV. At the time, I was using TensorFlow, one of the most extensive deep learning libraries available. In contrast, I later found that my high school's computer vision class consisted mostly of basic algorithms written in plain old C++.
A library is inherently a tool that lets someone use techniques without needing to understand all of their intricacies. A simple circle detection algorithm that would otherwise take dozens of lines of code and hours of tweaking and debugging can be implemented in just a few lines, with higher speed and accuracy, by using a library. The benefit is obvious: you get to do very complicated work without needing to understand it deeply, saving a lot of time and effort. The drawback is just as clear: you become dependent on what is essentially a black box to do your most complicated work.
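To make that contrast concrete, here is a minimal sketch of circle detection with a library such as OpenCV. The file name and parameter values are illustrative assumptions, not taken from any particular project; the point is that a single library call stands in for the edge detection and voting logic you would otherwise write by hand.

```python
# Minimal sketch: circle detection via OpenCV's Hough transform.
# "coins.png" and the parameter values below are illustrative assumptions.
import cv2
import numpy as np

img = cv2.imread("coins.png", cv2.IMREAD_GRAYSCALE)
if img is None:
    raise FileNotFoundError("coins.png not found")

img = cv2.medianBlur(img, 5)  # smooth to reduce spurious detections

# One call replaces dozens of lines of hand-written edge detection
# and accumulator code.
circles = cv2.HoughCircles(
    img, cv2.HOUGH_GRADIENT, dp=1.2, minDist=30,
    param1=100, param2=50, minRadius=10, maxRadius=100
)

if circles is not None:
    for x, y, r in np.round(circles[0]).astype(int):
        print(f"circle at ({x}, {y}) with radius {r}")
```

Writing the same thing from scratch would mean implementing gradient computation, edge thresholding, and a circular accumulator yourself, which is exactly the work the library hides from you.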
There are many tutorials online that promise to teach you how to use certain libraries. Some "coding bootcamps" take it a step further by aiming specifically to get you hired for a job. The skills you can learn from these sources are arguably quite shallow, as they focus on covering a wide range of common tools rather than teaching anything in depth. Innovations are only possible with a deep understanding of very specific topics, because that depth gives you the context to use old skills in new ways. A library inherently limits this by providing a simpler interface to a restricted set of functions.
The ideal set of skills to focus on depends largely on the kind of work you wish to do. A research professor of artificial intelligence, for example, should definitely be well-versed in algorithms and high-level math, because their career depends on being able to make advances at that level. In other words, the job of an academic is to know everything about nothing. A web developer, on the other hand, needs to learn a large number of libraries very quickly to meet the demands of their employers; in the vast majority of cases, it is utterly unnecessary to work at a lower level on the fairly standardized web. Taking it further, neither the researcher nor the web developer would benefit much from spending an extensive amount of time learning to write a compiler in assembly. Thus, the danger of a library is not necessarily in not understanding how it works; rather, it is that it can deceive you into believing that you already understand it, or that simpler computer vision algorithms simply do not exist, just because you are able to use it.
Entering the world of competitive programming can be an exciting moment. The possibility of being rewarded for a skill you have honed for years is incredibly enticing, but it is also the beginning of your competitive programming career, and as always, there are a few novice mistakes waiting to be made.
The most difficult part of a good programming project is coming up with a good idea in the first place. Why? Because millions of people know how to code, and some of them are very good at it. There are countless ways to learn how to code efficiently, but tips for coming up with ideas are inevitably vague.
Computer science originated with the birth of the first electronic computers in the 1940s. Prominent programming languages like Java did not exist at the time, so programmers had to write raw binary machine code or use early, cumbersome languages such as UNIVAC Short Code.