## Convolution

### What you will learn

• Software: relatively simple constructs, but a great exercise in constructing iterations. Visualization.
• Domain: develop an intuition for how convolution works, and for its time complexity.

Input: construct the following signals of roughly the same height and width. Consider whether each of these is an odd or an even signal.

• Single rectangular pulse centered at zero
• Single triangle centered at zero
• One period of a square wave
• Single sawtooth

### Convolve each pair of inputs

• Decide what the range of the summation should be ($-\infty$ to $\infty$?), i.e., how many steps do you need?
• Start with a single step and plot the result. Repeat for each step. Try animating it!
• How does the type of signal affect the output?
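
The steps above can be sketched as follows (the function name is my own); the nested loops also make the $O(N \cdot M)$ time complexity visible, and show that the summation range never really needs to run from $-\infty$ to $\infty$:

```python
def convolve(x, h):
    """Discrete convolution y[n] = sum_k x[k] * h[n - k].

    For len(x) == N and len(h) == M, only N + M - 1 output samples
    can be nonzero, so that is all we compute.
    """
    N, M = len(x), len(h)
    y = [0.0] * (N + M - 1)
    for n in range(len(y)):
        for k in range(N):
            if 0 <= n - k < M:      # h is zero outside its support
                y[n] += x[k] * h[n - k]
    return y

# A rectangular pulse convolved with itself gives a triangle:
rect = [1.0] * 4
print(convolve(rect, rect))   # [1.0, 2.0, 3.0, 4.0, 3.0, 2.0, 1.0]
```

Plotting `y` after each outer-loop step (rather than only at the end) gives exactly the step-by-step animation the exercise asks for.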

## Modulation

### What you will learn

• Software: fairly simple constructs

### Amplitude Modulation

• Construct a carrier (high frequency) and a signal, each represented as a list of values
• Modulate the carrier with the signal, and visualize all three signals
• Take the modulated signal and subtract out the carrier to recover the original signal
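
Taking the "subtract out the carrier" step literally suggests additive mixing; here is a minimal sketch under that assumption (sample rate and frequencies are arbitrary values of mine). Note that classical AM instead multiplies the carrier by $(1 + \text{message})$, and recovery then needs envelope detection rather than subtraction.

```python
import math

fs = 1000                          # samples per second (assumed)
fc, fm = 100.0, 5.0                # carrier and message frequencies (assumed)
t = [n / fs for n in range(fs)]    # one second of samples

carrier = [math.sin(2 * math.pi * fc * ti) for ti in t]
message = [0.5 * math.sin(2 * math.pi * fm * ti) for ti in t]

# Additive 'modulation': superimpose the message on the carrier, so
# subtracting the carrier recovers the message exactly.
modulated = [c + m for c, m in zip(carrier, message)]
recovered = [x - c for x, c in zip(modulated, carrier)]

print(max(abs(r - m) for r, m in zip(recovered, message)))  # effectively zero
```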

### Frequency Modulation – 1

• Based on the frequency of your carrier, determine an appropriate $\Delta_f$
• Modify the carrier frequency for a range of cycles based on the input signal
• Plot all three signals
• Recover the input signal
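
A sketch of binary FM along these lines (all the numeric values are assumptions of mine): each input bit selects $f_c + \Delta_f$ or $f_c - \Delta_f$ for its stretch of the carrier, and recovery simply counts zero crossings per bit window, since the higher frequency produces more of them.

```python
import math

fs = 8000                     # sample rate in Hz (assumed)
fc, df = 400.0, 100.0         # carrier frequency and deviation (assumed)
bit_time = 0.05               # seconds per input bit (assumed)
bits = [1, 0, 1, 1, 0]

spb = int(fs * bit_time)      # samples per bit
modulated = []
for b in bits:
    f = fc + df if b else fc - df
    modulated.extend(math.sin(2 * math.pi * f * n / fs) for n in range(spb))

# Recover each bit by counting zero crossings in its window.
threshold = 2 * fc * bit_time          # crossings expected at exactly fc
recovered = []
for i in range(len(bits)):
    chunk = modulated[i * spb:(i + 1) * spb]
    crossings = sum(1 for s0, s1 in zip(chunk, chunk[1:]) if s0 * s1 < 0)
    recovered.append(1 if crossings > threshold else 0)

print(recovered)    # [1, 0, 1, 1, 0]
```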

### Frequency Modulation – 2

• Rather than $f_c \pm \Delta_f$ for a $1$ and a $0$, use a return-to-zero scheme, so the frequency changes only when the input changes from zero to one or vice versa.
• Plot all three signals
• Recover the input signal

### Frequency Modulation – 3

• You could group your input data into pairs of bits (so four levels) and use correspondingly more $\Delta_f$ values
• Plot all three signals
• Recover the input signal

## The Rich Man’s Square Wave Generator

### What you will learn

• Software: python, matplotlib
• Domain: a different way of looking at Fourier series

Do this in python, mainly for its visualization capabilities.

### Rich man’s square wave generator

• Create a sine wave of a given frequency and amplitude. Plot it for one period
• Create a second sine wave with twice the frequency and half the amplitude of the first. Add it to the wave from the previous step.
• Continue with a third, fourth, … nth sine wave of successively higher frequencies and smaller amplitudes.
• What waveform do we get as we add higher frequency components?
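
A sketch of the recipe exactly as stated above — doubling frequency, halving amplitude each step (the function name and defaults are mine):

```python
import math

def partial_sum(n_terms, freq=1.0, amp=1.0, n_points=400):
    """Sum n_terms sine waves where each successive wave has twice the
    frequency and half the amplitude of the previous one."""
    t = [i / n_points for i in range(n_points)]   # one period of the base wave
    y = [0.0] * n_points
    f, a = freq, amp
    for _ in range(n_terms):
        for i, ti in enumerate(t):
            y[i] += a * math.sin(2 * math.pi * f * ti)
        f, a = 2 * f, a / 2
    return t, y

# Plotting successive partial sums with matplotlib, as the post suggests:
# import matplotlib.pyplot as plt
# for n in (1, 2, 4, 8):
#     plt.plot(*partial_sum(n), label=f"{n} terms")
# plt.legend(); plt.show()
```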

### Analysis

• Plot the error between your square wave and a true square wave at each step. Can you bring this down to zero? What is the limit? Can you prove this limit?

### Other combinations

• Take odd multiples of frequency and divide the amplitudes by the multiple squared

## The Poor Man’s Square Wave Generator

### What you will learn

• Software: encapsulating functionality, parameterizing functions
• Domain: frequency and period of a square wave

### The poor man’s square wave generator

• Decide a frequency and amplitude of your desired square wave
• Create a list with low and high values (these correspond to the amplitude) and specific indices (these correspond to the frequency)
• Plot this!
• Create additional signals of multiples of this frequency, and different amplitudes
• Repeat using numpy arrays. How might you efficiently create these signals, using features of the numpy library?
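
One index-arithmetic sketch of the numpy version (the function name and the low/high convention are mine): each index's position within the period decides low vs. high, with no trigonometry needed.

```python
import numpy as np

def square_wave(n_samples, period, low=0.0, high=1.0):
    """Square wave from pure index arithmetic: indices in the first
    half of each period map to high, the rest to low."""
    idx = np.arange(n_samples)
    return np.where((idx % period) < period // 2, high, low)

print(square_wave(8, 4))   # [1. 1. 0. 0. 1. 1. 0. 0.]
```

Harmonics at multiples of this frequency are then just `square_wave(n, period // k, ...)` for integer `k`, which is the kind of parameterization the exercise is after.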

## Digital Logic Simulator

### What you will learn

• Software: Depending on how you architect this, fairly sophisticated! Work with graphs, and best done using OOD.
• Domain: Digital logic circuits, and the sheer exponentiality of $2^n$

This is fairly advanced, but the payoff is worth it. We will build a digital circuit simulator in this exercise, parallelize it using a neat technique, and add timing analysis and fault simulation as well.

You could do this in python, but if you want to do the parallel simulator, you will need to use C or C++ (maybe Java?)

### SimSim (Simple Simulator)

We will create a graph to represent the circuit in this part. To start with, we will create nodes and edges connecting nodes. The nodes will have a basic type (NOT, AND, OR, etc.). The nodes will be connected to each other to form a circuit; each node will have a specific number of inputs but can drive multiple other gates. We will have special nodes called Primary Inputs (PIs) and Primary Outputs (POs).

Each node ‘reads’ its inputs (the values on its input edges) and determines its output value based on its type. It places this value on its output edge(s).

Create a circuit, apply patterns to the PIs and see if the POs match what you expect. You should be able to cycle through the truth table of your circuit to verify the output.
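
A minimal object-oriented sketch of such a node graph (the class and method names are my own). It evaluates recursively from the POs back to the PIs; for larger circuits with fanout you would instead evaluate each node once, in topological order.

```python
class Node:
    """A gate in the circuit graph. A PI has no inputs and its value is
    set directly; any node may drive several others (fanout)."""
    OPS = {
        "NOT": lambda v: 1 - v[0],
        "AND": lambda v: int(all(v)),
        "OR":  lambda v: int(any(v)),
    }

    def __init__(self, kind):
        self.kind = kind        # "PI", "NOT", "AND", or "OR"
        self.inputs = []        # nodes driving this one
        self.value = 0          # used only for PIs

    def evaluate(self):
        if self.kind == "PI":
            return self.value
        return Node.OPS[self.kind]([n.evaluate() for n in self.inputs])

# Tiny circuit: out = NOT(a AND b), i.e. a NAND built from AND + NOT.
a, b = Node("PI"), Node("PI")
g = Node("AND"); g.inputs = [a, b]
out = Node("NOT"); out.inputs = [g]

# Cycle through the truth table, as the exercise suggests:
for va in (0, 1):
    for vb in (0, 1):
        a.value, b.value = va, vb
        print(va, vb, "->", out.evaluate())
```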

### FaultSim (Fault Simulator)

A simple fault model is the ‘stuck-at’ model. Tie an edge to a constant 0 or 1 and see if you can detect this at the POs.

Rather than simulating all possible input values, can you figure out a minimal set that can detect all possible faults?

### TimeSim (Timing Simulator)

SimSim calculates the output values directly; it has no notion of gate or propagation delays.

Assign delays to each gate (and what the heck, to each connection as well). For example, an inverter can have a delay of 1 time unit, a NAND gate 2, and a NOR gate 3 time units (hmm, why these three values?). Now set up a global clock and calculate the outputs at each time step. Compared to SimSim, the final values of the POs will settle only after a few time units.

And they may bounce around before settling! Can you set up a circuit so that you see glitches at the output?

### ParSim (Parallel Simulator)

This is a unique form of parallelism!

In all of the above simulators, the values are 1 bit at each PI, gate output, and PO. The function evaluation at each gate is also a bitwise operation. However, we could instead treat each of these as a 32 (or longer!) bit int and do the same logical operations on the entire word! This will take exactly the same amount of time as before, but we’ll be running 32 tests in parallel. Do you see what I meant by calling it a unique form of parallelism?
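
A sketch of the word-parallel idea in Python, whose ints support the same bitwise operations (the packing convention below is my own): bit $i$ of every word belongs to test number $i$, so one bitwise operation evaluates the gate for all 32 tests at once.

```python
MASK = (1 << 32) - 1            # model 32-bit words

def nand(x, y):
    """One NAND evaluation across all 32 packed test vectors."""
    return ~(x & y) & MASK

# Exhaustive 2-input patterns, replicated across the word:
a = 0xAAAAAAAA                  # bits ...1010: a = 0, 1, 0, 1, ... per test
b = 0xCCCCCCCC                  # bits ...1100: b = 0, 0, 1, 1, ... per test

word = nand(a, b)
results = [(word >> i) & 1 for i in range(4)]   # unpack the first 4 tests
print(results)                  # [1, 1, 1, 0], the NAND truth table
```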

### Generalizations

• Read in a circuit from a file
• Read in inputs and outputs from a file

## RC Circuit Analysis

### What you will learn

• Software: relatively basic constructs. Iterative solver!
• Domain: A better feel for a capacitor charging and discharging and the time constant.

### RC Circuits

Take as input values of R (in Ohms), C (in Farads), and supply voltage (in Volts). Assume that the voltage is applied to a series RC circuit at time $t=0$, and calculate the voltage across the capacitor at different times. Plot these!
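
A minimal iterative-solver sketch using forward Euler on $dV/dt = (V_s - V)/(RC)$ (the step size and component values are arbitrary choices of mine):

```python
def rc_charge(R, C, v_supply, t_end, dt=1e-6):
    """Forward-Euler integration of dV/dt = (Vs - V) / (R*C), with the
    capacitor starting uncharged at t = 0."""
    tau = R * C
    t, v = 0.0, 0.0
    trace = [(t, v)]
    while t < t_end:
        v += dt * (v_supply - v) / tau
        t += dt
        trace.append((t, v))
    return trace

# After one time constant (tau = 1 ms here) the capacitor should sit at
# roughly 63.2% of the supply voltage.
trace = rc_charge(R=1e3, C=1e-6, v_supply=5.0, t_end=1e-3)
print(trace[-1][1])    # close to 5 * (1 - 1/e) ≈ 3.16 V
```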

#### Variations:

• Determine how much time it will take to charge the capacitor to a target voltage.
• Vary R to see how the curve shifts. Determine the value of C so that the shift is exactly the same. Which has a larger effect?
• Repeat for discharging the capacitor, rather than charging it

Apply a square wave as input and simulate for a few cycles. Vary the period and duty cycle and see if the behavior matches your intuition.

## CMOS Characterization

### What you will learn

• Software: relatively simple implementation, plus some visualization
• Domain: improve your understanding of how a transistor works

### CMOS Characterization – 1

A CMOS transistor operates in three different regions — cutoff, saturation, and linear. The equations for these are fairly straightforward. Implement these using hard-coded values for parameters that you may need to specify, such as doping concentrations, mobility, etc.

Given input voltages, determine the region of operation and use the appropriate equation to calculate $I_{DS}$.
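
A sketch using the standard square-law equations, with hard-coded sample parameters (the threshold voltage and transconductance values below are placeholders, not real process data):

```python
def nmos_ids(vgs, vds, vth=0.7, k=2e-4):
    """Square-law NMOS drain current. k stands in for mu_n*Cox*(W/L);
    both k and vth are illustrative values only."""
    vov = vgs - vth                     # overdrive voltage
    if vov <= 0:
        return 0.0                      # cutoff
    if vds < vov:                       # linear (triode) region
        return k * (vov * vds - vds * vds / 2)
    return 0.5 * k * vov * vov          # saturation

print(nmos_ids(1.2, 2.0))   # saturation: 0.5 * 2e-4 * 0.5**2 = 25 µA
```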

### CMOS Characterization – 2

• Calculate $I_{DS}$ for different values of $V_{GS}$ and $V_{DS}$
• Plot $I_{DS}$ vs. $V_{DS}$ for different values of $V_{GS}$

### CMOS Characterization – 3

• Analyze the differences between N- and P-MOS transistors
• What is the impact of transistor dimensions?
• Read transistor parameters from a file and repeat for different technology nodes.

## Programming for the Electronics Engineering Student

### Motivation

I’ve had a few odd discussions with students from the Electronics branch related to software development. Broadly,

• sniff I’m in electronics, don’t expect me to dabble in ugh software
• We don’t get the subjects that CS students do, so aren’t as prepared for interviews as they are
• eh, I’m just going to do it

There’s so much to unpack here…

Firstly, software (and data structures, and algorithms, and …) is not the domain of any one branch. It is a tool to be used just like any other, and a very powerful tool at that. So if you’re being snobbish about not doing any software at the altar of “electronics,” you’re just shooting yourself in the foot. Most electronics is software, and the more comfortable you are with this tool, the better an electronics engineer you will be. Plus, software can help you understand electronics much better than you otherwise would.

Second, there are a lot of opportunities to develop software skills. The courses that you don’t get to do aren’t all that important; you can learn the core concepts by yourself. There are a few (Databases!) that you won’t get exposure to, but you haven’t really missed much. Read on!

Finally, self-study is the best way to develop these skills. Here’s the big secret:

the only way to learn how to program … is to program

### The obvious question is: “what do I program?”

This is a series of posts presenting program suggestions: what to look for and what to focus on. If you go through these, you will be a better electronics engineer and a better software developer than 99% of your peers, irrespective of their major. I’ve gone through the typical subjects from the second year onwards and defined assignments that (i) build on the theory that you learn, (ii) expose you to different implementation concepts, and (iii) improve your understanding of data structures and algorithms.

### One last thing: the discipline of programming

• All your implementations should have a reasonable testing strategy in place
• All your code should be instrumented to measure the performance of key parts of the code
• All C and C++ code should be compiled with -Wall
• All code should be valgrind-certified error-free
• All code should be in a version control system

I’ll talk about each of these points in a later post.

Now let’s

Shut Up and Code!


## The Current Education System is Dead

I don’t think this article will tread new ground, but it’s an attempt at organizing my thoughts.

Observe the events of the past few months. We have been forced to go from classroom lectures to online classes – video recordings of lecturers speaking about their topics. If we have a hypothetical college in Pune with ~400 students in a batch, and therefore ~7 divisions, we suddenly realize that we do not need 7 lectures, but can do with just one. And that one lecture can be conducted by the most effective teacher. So not only is a lot of labor saved, but students get the best instruction.

Question: Why not do this as a practice, if there are benefits all around? Why subject students to sub-standard teaching?

Thinking further, extend this to all colleges in Pune. Get the best teacher for each topic, and we only need one lecturer and students get the best of Pune, not just the best of their college.

Question: What differentiator is a college providing to justify locking in students to only their offering?

And naturally, why limit ourselves to Pune? Why not get the best teacher in the entire world?

There are two additional factors to take into consideration.

First, evaluations. How do we give feedback to students so that they understand how much they have learned, and what they need to focus on (this should be the primary goal of evaluations)?

Second, degrees. What purpose do they serve?

This leads me to wonder why we even have colleges and universities in today’s day and age. Companies have their own criteria for evaluating applicants, and if they use college scores at all, it is as a filter (which is pretty stupid!). There are many that conduct tests online, so it doesn’t matter where you studied, but what you know (see Hackerrank for continuous iterations of this).

So:

• I can attend courses online, learning from the best teachers in the world. In some cases, I can learn the same subject from multiple teachers, to get different perspectives, and deepen my understanding.
• I can engage with mentors who have the right research, academic and industry experience to guide the subjects that I focus on.
• I can build a portfolio of projects to demonstrate my capabilities. I can contribute to world-class open source projects so that I gain experience working in teams large and small.

Why do I need to attend college at all?

## Scaling Challenges for MOOCs

I’ve been a massive fan of MOOCs ever since they started, and have personally leveraged the opportunity to the fullest (though it did take some time to figure out how to complete a course, versus merely starting one!). Students get to learn from the best educators, often from top universities, in a variety of subjects, for free or for a nominal cost. What’s not to like?

These days Universities are moving from offering individual courses, or ‘micro-master’s’ programs to offering full-fledged online Masters Degrees. These are equivalent to what one would obtain by physically attending the college (the certificate does not distinguish between online and in-person), but this also requires a lot more rigor in evaluating students who attend online, a point that I come back to below.

I’ve seen quite a transition from a video recording of the teacher with slides/monitor on the side to attempts using Microsoft Kinect. And there’s the disconcerting trend where the instructor writes on a board facing us, but we see things the right way (I haven’t quite figured out how that is done). However, at its core, the model is unchanged: the lecturer lectures and we watch them online, rather than being physically present in the classroom. The watching is done at our convenience and pace, and we can rewind and rewatch what we want. MOOCs are making knowledge accessible, but I don’t think we’ve actually used technology to develop a new way of instruction.

I’d like to focus on a few aspects of scale — the ‘Massive’ part of MOOCs. How do these courses deal with the sheer number of students enrolled, and how do they impart the best possible instruction?

The first, obvious, observation is that there are numerous platforms available (edX, Coursera, Udemy, etc.) that take care of the bureaucratic stuff (accounts, logins, billing, tracking, etc.) and the technology (videos, tests, etc.), so these don’t have to be redeveloped for each course. And yes, there are no scaling limits here.

The second aspect of scale is that of content delivery. Lectures are recorded and available at all times. We don’t need to have all students in the same location (or online) at the same time. Since the model is different from a physical classroom, the limitations of a physical classroom are done away with. This is a solved problem.

The third aspect of scale is where things get interesting: student evaluation. The original MOOCs (and, I suspect, most of the current generation) limited themselves to quizzes consisting of multiple-choice questions. Some were embedded in the videos. These are, of course, easy to grade automatically. Others allow text boxes to enter (numerical) values, and sometimes $\LaTeX$-formatted answers, but these take a bit of getting used to. I have also seen some that require essay-type answers; these were peer-evaluated, which I thought was a good way to address the problems of scale, but the practice seems to have declined – a failed experiment, perhaps?

Back to the degree-granting MOOCs (can I call them that?): they decided the only way of evaluating students was the traditional way, namely, have them write an exam the way they normally would. So…

students turn on their webcams, rotate them around the room to prove that there is nobody around helping them, and keep these on and focused on them for the duration of the test. I believe the microphone is kept on as well. And there is a person on the other end of the connection keeping an eye on the student! Holy 19th-century non-scalable solutions, Batman!!!

This is a great opportunity for an unskilled proctor who otherwise would not have gainful employment. I have no idea how much they make, and what trauma they go through, having to spend their waking hours watching students bent over exam books.

The final aspect of scaling is that when we have essay-type questions (say, ‘develop an algorithm to do x’), a human grader goes through these and assigns scores. Even if we overload a poor TA with 100 exams, the larger the course, the correspondingly larger the staff needed to manage just the grading.

I think we’ve been going about this the wrong way. We’ve been trying to replicate what we do offline to the online world, which leads to issues like the above. How might we look at this from a fresh perspective? Glad you asked!

• Goal: Provide feedback to the student on which areas they are doing well and areas for improvement. This is (or should be) the only reason to conduct exams, but that is fodder for another article. A secondary goal is to provide a normalized score that accurately captures the student’s capabilities in the subject. Ultimately, we would like to measure how much a student has learnt.
• The means of measurement have to be scalable. It should work across thousands or tens of thousands of students.
• It should not be possible (too restrictive? Maybe it should be very difficult) to game the measurement

We can think about how technology can solve these issues once we have clarity on what we are trying to solve!

Thoughts?

## Little Things

Student activities: Operate a slot machine using your eyes

This is cool and fun and took just a few days to do. BUT this could be done by using other interfaces – mouse, keyboard, sound(?), etc. What would be a good use of eye-tracking or blinking that would not be possible with existing interfaces?

## Learning by Explaining

It’s incredibly useful to have students work with a partner, rather than doing exercises alone. They start discussing approaches and ideas and have to explain their thinking to each other. This forces them to consider whether their approach is valid, and the act of talking about what they are thinking deepens their understanding as well. Bouncing ideas off each other leads to further ideas that would not otherwise have come about.

The most fun sessions are when the room is noisy with conversations between pairs of students, conversations across the entire class, students taking hints from each other, offering solutions to solving particularly knotty problems and egging each other on. And most times, we end up with solutions very different from what I had come up with — which is brilliant!

## Learning from mistakes

[Inspired by Dennett’s Intuition Pumps]

My current approach, and where I’d like to take it: this works 1:1 or in a small group, and is much more difficult in a lecture-style setting.

When I’m trying to teach a new concept – programming, algorithms, mathematics – I try to give students a bit of structure and context and get them to develop the solution. The amount of structure and context varies from student to student, and it’s not easy getting it right. Done well, students reach that aha! moment on their own. I’ve been doing this unconsciously for quite a while, but it’s probably time to do it with more structure (hah!).

One more aspect I would like to add is having students reflect on failed approaches. This may also require reducing the context and allowing them to try out a variety of approaches. A lot of learning can happen in this analysis. The downside is that it needs time, and a willingness to expend the effort.
