Analog Vs Discrete Vs Digital Signals

         The first step of digitization is to sample the analog signal. The output of this process is a discrete-time signal. At this point, there is no constraint on the amplitude of the output signal. The figure below shows both signals.

[Figure: the analog signal and its sampled, discrete-time version]

        The next step of the digitization process is quantization. It involves approximating the sampled analog values using a finite set of values. After this step, the output signal is discrete in both time and amplitude; the amplitude takes one of M possible values.


        The third step of digitization is to encode these quantized values into bits. Each amplitude level can be represented as a binary sequence. The output of this process is a digital signal, a physical signal that is continuous in time and whose amplitude switches between two levels representing 1 and 0.

        The term digital signal can also refer to any signal that switches between a discrete number of voltage levels.
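As a rough illustration (added here, not from the original text), the sketch below walks a sine wave through the three steps; the sample rate, level count, and rounding scheme are arbitrary choices for demonstration:

#include <stdio.h>
#include <math.h>

#define NUM_SAMPLES 8    /* hypothetical number of samples */
#define M_LEVELS    16   /* quantizer with M = 16 levels -> 4 bits per sample */

int main(void)
{
    const double PI = 3.14159265358979;
    for (int n = 0; n < NUM_SAMPLES; n++) {
        /* Step 1 (sampling): a 1 Hz sine sampled at 8 Hz -> discrete time */
        double x = sin(2.0 * PI * n / 8.0);

        /* Step 2 (quantization): map [-1, 1] onto one of M levels */
        int level = (int)((x + 1.0) / 2.0 * (M_LEVELS - 1) + 0.5);

        /* Step 3 (encoding): represent the level as a 4-bit binary sequence */
        printf("n=%d  x=%+.3f  level=%2d  bits=", n, x, level);
        for (int b = 3; b >= 0; b--)
            putchar(((level >> b) & 1) ? '1' : '0');
        putchar('\n');
    }
    return 0;
}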

What are Static Variables and Functions in C?

  • In C, functions are global by default. The “static” keyword before a function name makes it static.
  • Unlike global functions in C, access to static functions is restricted to the file where they are declared. Therefore, when we want to restrict access to functions, we make them static.
  • Another reason for making a function static is to allow reuse of the same function name in other files.
  • For example, the function fun() below is static.

#include <stdio.h>

static int fun(void)
{
    printf("I am a static function\n");
    return 0;
}
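The heading also mentions static variables. As a quick illustration (added here; the original only covers functions), a static local variable is initialized once and retains its value across calls:

#include <stdio.h>

void counter(void)
{
    static int count = 0;  /* initialized once, keeps its value between calls */
    count++;
    printf("called %d times\n", count);
}

int main(void)
{
    counter();  /* prints: called 1 times */
    counter();  /* prints: called 2 times */
    return 0;
}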


Source : GeeksforGeeks

Difference between float and double in C/C++

  • double has roughly twice the precision of float.
  • float is a 32-bit IEEE 754 single-precision floating-point number: 1 bit for the sign, 8 bits for the exponent, and 23 bits for the mantissa. float has about 7 decimal digits of precision.
  • double is a 64-bit IEEE 754 double-precision floating-point number: 1 bit for the sign, 11 bits for the exponent, and 52 bits for the mantissa. double has about 15 decimal digits of precision (the snippet below checks these limits on your platform).
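A small addition, not part of the original list: the constants in <float.h> report each type's guaranteed decimal digits of precision.

#include <stdio.h>
#include <float.h>

int main(void)
{
    /* FLT_DIG / DBL_DIG are the decimal digits guaranteed to round-trip */
    printf("float : %zu bytes, %d decimal digits\n", sizeof(float), FLT_DIG);
    printf("double: %zu bytes, %d decimal digits\n", sizeof(double), DBL_DIG);
    return 0;
}

On a typical IEEE 754 system this prints 4 bytes / 6 digits for float and 8 bytes / 15 digits for double.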

Let’s take an example:

For the quadratic equation x² - 4.0000000x + 3.9999999 = 0, the exact roots to 10 significant digits are r1 = 2.000316228 and r2 = 1.999683772.

// C program to demonstrate
// double and float precision values
#include <stdio.h>
#include <math.h>

// utility function which calculates the roots of a
// quadratic equation using double values
void double_solve(double a, double b, double c)
{
    double d = b*b - 4.0*a*c;
    double sd = sqrt(d);
    double r1 = (-b + sd) / (2.0*a);
    double r2 = (-b - sd) / (2.0*a);
    printf("%.5f\t%.5f\n", r1, r2);
}

// utility function which calculates the roots of a
// quadratic equation using float values
void float_solve(float a, float b, float c)
{
    float d = b*b - 4.0f*a*c;
    float sd = sqrtf(d);
    float r1 = (-b + sd) / (2.0f*a);
    float r2 = (-b - sd) / (2.0f*a);
    printf("%.5f\t%.5f\n", r1, r2);
}

// driver program
int main()
{
    float fa = 1.0f;
    float fb = -4.0000000f;
    float fc = 3.9999999f;
    double da = 1.0;
    double db = -4.0000000;
    double dc = 3.9999999;

    printf("roots of equation x2 - 4.0000000 x + 3.9999999 = 0 are : \n");
    printf("for float values: \n");
    float_solve(fa, fb, fc);

    printf("for double values: \n");
    double_solve(da, db, dc);
    return 0;
}

Output:

roots of equation x2 - 4.0000000 x + 3.9999999 = 0 are : 
for float values: 
2.00000    2.00000
for double values: 
2.00032    1.99968

The float version reports both roots as 2.00000 because 3.9999999 is not representable in single precision (it rounds to exactly 4.0f), so the discriminant b*b - 4*a*c cancels to zero; double carries enough digits to resolve the two distinct roots.





Difference between Definition and Declaration?


  • Declaration of a variable informs the compiler of the following: the name of the variable, the type of value it holds, and the initial value, if any. That is, a declaration gives details about the properties of a variable. Definition of a variable, on the other hand, says where the variable gets stored: memory for the variable is allocated during its definition.
  • In C, the definition and declaration of a variable usually take place at the same time, i.e., there is often no difference between declaration and definition. For example, consider the following declaration

int a;

  • Here, information such as the variable name a and its data type int is sent to the compiler and stored in a data structure known as the symbol table. Along with this, memory of the size of an int (typically 4 bytes, depending on the compiler and platform) is allocated.
  • If we want only to declare a variable and not define it, i.e., not allocate memory for it, the following declaration can be used

extern int a;

  • In this example, only the information about the variable is sent to the compiler and no memory is allocated. This tells the compiler that the variable a is declared now, while the memory for it will be defined later, in the same file or in a different file (see the two-file sketch after this list).
  • Declaration of a function provides the compiler the name of the function, the number and types of the arguments it takes, and its return type. For example, consider the following code,
int add(int, int);
  • Here, a function named add is declared with 2 arguments of type int and return type int. Memory will not be allocated at this stage.
  • The definition of a function is what actually allocates memory for it. For example, consider the following function definition,

int add(int a, int b)
{
    return (a + b);
}

  • During this function definition, the memory for the function add is allocated. A variable or a function can be declared any number of times, but it can be defined only once.
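A minimal two-file sketch of the declaration/definition split (file names are illustrative):

/* file1.c -- definition: memory for 'a' is allocated here */
int a = 10;

/* file2.c -- declaration only: refers to the 'a' defined in file1.c */
#include <stdio.h>

extern int a;

int main(void)
{
    printf("%d\n", a);  /* prints 10 once the two files are linked */
    return 0;
}

Compiling the two files together (e.g., gcc file1.c file2.c) resolves the declaration in file2.c to the definition in file1.c.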

Content Source : GeeksforGeeks

Useful Links : Understanding “extern” keyword in C

How can I calculate the resolution of a load cell?

 Let's assume that you have a 5 kg load cell.

Its output is specified as 1 mV/V of excitation, meaning that if you have 5 V excitation, you will get 5 mV of output at the full-scale load of 5 kg.

Simply correlate this as 5000 µV (microvolts) corresponding to 5000 g of load.

Next, your digital indicator has a full-scale input span of ±20 mV to give you a count of ±20,000, meaning it has a resolution of 1 µV (microvolt).

        Resolution of the load scale = 5000 g / 5000 counts = 1 g per count (the same arithmetic is sketched in code below)
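A minimal sketch of that calculation in C; the constants are the example's values, not general rules:

#include <stdio.h>

int main(void)
{
    double capacity_g    = 5000.0;  /* 5 kg load cell, in grams */
    double sens_mv_per_v = 1.0;     /* rated output: 1 mV per V of excitation */
    double excitation_v  = 5.0;     /* bridge excitation voltage */
    double adc_lsb_uv    = 1.0;     /* indicator resolves 1 uV per count */

    double full_scale_uv = sens_mv_per_v * excitation_v * 1000.0; /* 5000 uV */
    double counts        = full_scale_uv / adc_lsb_uv;            /* 5000 counts */
    double resolution_g  = capacity_g / counts;                   /* 1 g per count */

    printf("full-scale output: %.0f uV\n", full_scale_uv);
    printf("resolution: %.2f g per count\n", resolution_g);
    return 0;
}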





What is Common-mode Voltage Gain?

  • Common-mode voltage gain refers to the amplification given to signals that appear on both inputs relative to the common (typically ground).
  • You will recall from a previous discussion that a differential amplifier is designed to amplify the difference between the two voltages applied to its inputs.
  • Thus, if both inputs were at +5 volts with respect to ground, for instance, the difference would be zero, and the output would likewise be zero. This defines ideal behavior and is a characteristic of an ideal op amp.
  • In a real op amp, common-mode voltages can receive some amplification and thus depart from the desired behavior. Since we are currently defining ideal characteristics, you should remember that an ideal op amp has a common-mode voltage gain of zero.
  • This means the output is unaffected by voltages that are common to both inputs (i.e., that present no difference). Figure 1.13 further illustrates the measurement of common-mode voltage gain; the defining formulas are given below.
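In formula terms (standard textbook definitions, added here for reference):

\[ A_{cm} = \frac{V_{out}}{V_{cm}}, \qquad \mathrm{CMRR} = \frac{A_d}{A_{cm}} \]

where V_cm is a voltage applied to both inputs, A_d is the differential gain, and an ideal op amp has A_cm = 0, hence infinite CMRR.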




What Is Static Code Analysis?

  • Static code analysis is a method of debugging by examining source code before a program is run. It’s done by analyzing a set of code against a set (or multiple sets) of coding rules. 
  • Static code analysis and static analysis are often used interchangeably, along with source code analysis. 
  • This type of analysis addresses weaknesses in source code that might lead to vulnerabilities. Of course, this may also be achieved through manual code reviews. But using automated tools is much more effective.
  • Static analysis is commonly used to comply with coding guidelines — such as MISRA. And it’s often used for complying with industry standards — such as ISO 26262.
  • Static code analysis is performed early in development, before software testing begins.

So, what’s the difference between static analysis and dynamic analysis?
  • Both types detect defects. The big difference is where they find defects in the development lifecycle.
  • Static analysis identifies defects before you run a program (e.g., between coding and unit testing).
  • Dynamic analysis identifies defects after you run a program (e.g., during unit testing). However, some coding errors might not surface during unit testing, so there are defects that dynamic testing might miss and static code analysis can find (see the example below).
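As a concrete illustration (ours, not from the source), here is the kind of defect a static analyzer flags before the program ever runs:

#include <stdio.h>

int main(void)
{
    int values[4] = {1, 2, 3, 4};
    int sum;                        /* analyzer: 'sum' is read uninitialized */

    for (int i = 0; i <= 4; i++)    /* analyzer: i == 4 indexes out of bounds */
        sum += values[i];

    printf("%d\n", sum);
    return 0;
}

Both defects compile cleanly and might even appear to work in a unit test, which is exactly the gap static analysis closes.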




WOW ! FRAMs as alternatives to flash memory in embedded designs

 Ferroelectric random access memory (FRAM) is widely known as a non-volatile, stand-alone memory technology that has been a part of the semiconductor industry for more than a decade.

In recent years, integrated circuit manufacturers have been considering FRAM as a strong contender for embedded, non-volatile memory, as an alternative to flash technology. This article discusses key technology attributes of FRAM while exploring specific use cases that demonstrate FRAM’s advantages.

Today there are multiple memory technologies that have the potential to change the landscape of embedded processing. However, until FRAM, none had surfaced as a strong contender for replacing flash technology in microcontrollers (MCUs).

What is FRAM?

FRAM is non-volatile memory that has power, endurance and read/write speeds similar to commonly used static RAM (SRAM). Information stored in an FRAM cell corresponds to the state of polarization of a ferroelectric crystal that can hold its contents even after the power source is removed. This is what makes FRAM truly non-volatile. Also, since the energy required to polarize a crystal is relatively low when compared to programming a flash cell, FRAM writes are inherently lower power than flash.



Figure 1. FRAM allows for continuous ultra-low-power data logging and supports more than 150,000 years of continuous data logging (vs. less than 7 minutes with flash)
Here are a few typical applications that use microcontrollers with flash technology today. Let’s look at how leveraging FRAM-based MCUs, rather than flash-based MCUs, brings cost, energy, and efficiency optimization.

Data logging:

A typical data logger application, such as a temperature data logger, can sample at rates anywhere between 1 Hz and 1,000 Hz. Now consider that the write time of a single byte in flash memory is approximately 75 µs.

In comparison, FRAM technology can be written at a rate of about one byte every 125 nanoseconds, roughly 600 times faster than flash (75 µs versus 125 ns per byte). Now consider what happens when the application reaches the end of a flash segment and needs to move to the next one: suddenly there is a 20 millisecond latency while waiting for a segment erase to complete.

The erase latency does not apply to FRAM, since FRAM bytes do not need to be pre-erased between writes. A 20 millisecond latency per segment does not seem prohibitive until we calculate how significantly it impacts the maximum write speed. For the purpose of this discussion, consider that the block of memory being written is 512 bytes in length. A flash memory block can be written 26 times per second, including the time taken to complete an erase cycle every time 512 bytes are written. This brings us to a total speed of 13 kBps [1].

In comparison, a 512 byte FRAM block can be written at speeds greater than 8 MBps [2]. Not every application requires such high write speeds, but consider: if your target application were required to write only 1 kB every second, an MCU with flash technology would spend 7% of its time active performing the write. An FRAM MCU would complete that task in 0.01% of the time, allowing the MCU to remain in standby 99.9% of the time and providing significant power savings (the sketch below reproduces this arithmetic).
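The duty-cycle claim can be reproduced from the quoted throughputs; a small sketch using the article's own numbers:

#include <stdio.h>

int main(void)
{
    double flash_kbps = 13.0;  /* flash block-write throughput quoted above, kB/s */
    double fram_mbps  = 8.0;   /* FRAM block-write throughput quoted above, MB/s */
    double load_kbps  = 1.0;   /* application writes 1 kB every second */

    /* fraction of each second the MCU must stay active to do the writes */
    double flash_duty = load_kbps / flash_kbps * 100.0;           /* ~7.7%  */
    double fram_duty  = load_kbps / (fram_mbps * 1000.0) * 100.0; /* ~0.0125% */

    printf("flash active: %.1f%% of the time\n", flash_duty);
    printf("FRAM active:  %.4f%% of the time\n", fram_duty);
    return 0;
}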

Energy harvesting
Many applications today are focused on using cleaner and greener energy, energy that is derived from natural sources such as sunlight, vibrations, heat or mechanical change. Such applications rely on small bursts of energy that provide power in short time intervals and the MCU is usually down to the wire in terms of how many lines of code can be executed before power is lost. Flash-based applications pay a premium in power, not only because of higher average power while accessing flash, but also because of higher peak power during flash write events.

This peak power is mainly due to the use of a charge pump and can reach values of up to 7mA, making non-volatile writes virtually taboo in the energy harvesting world [1]. With FRAM there is no charge pump and therefore no high-current writes. The average power when writing to FRAM is the same as when reading from or executing out of FRAM (i.e., there is no penalty for non-volatile writes, making FRAM a truly flexible option for energy harvesting applications).



Figure 2. All-in-one: FRAM microcontrollers deliver maximum read, write, power and memory benefits

RFID Tags:

Radio frequency identification (RFID) tags are making an appearance in many places: store shelves for displaying prices, name badges at conferences and on industrial automation floors to mark and identify objects on a conveyor belt. Some of these applications require memory writes up to 100 times a day.

Consider a byte of flash memory with a typical endurance of 10K write/erase cycles. To achieve 100K write/erase cycle endurance, the application will have to set aside 10 bytes of flash memory for every one byte of data, meeting the endurance requirements at the cost of high redundancy.

In comparison, an FRAM memory byte can endure 10^15 write/erase cycles, 100 billion times more than a flash byte [3]. For applications that require high endurance on the order of millions of write/erase cycles, FRAM’s endurance specification is unmatched by other embedded non-volatile memory technologies available today.

Handheld metering:

Blood glucose metering is one example where loss of power is highly critical. In the case of power failure due to a depleted battery, the meter is required to save a time stamp, store the readings at the time of failure and perhaps even perform a few math functions before shutting down.

Consider a flash-based metering application whose battery is depleted of charge; the power drop can be approximated as about 300mV in 0.01 seconds. In this time, up to 80K bytes of FRAM can be written, compared to about 8K bytes of flash. And this is without factoring in the high peak and average current requirements of a flash write, which drain the battery rapidly and bring down the backup capability significantly.

Another use case of system backup in power fail events is in energy metering where the energy reading needs to be preserved in non-volatile memory until power is restored. In such cases, the power usage during system backup is critical as backup battery sources are expected to last up to 10 years.

The list of applications where FRAM not only provides differentiation but may also be the only viable option is as diverse as it is vast. To test drive an FRAM-based MCU, check out the MSP430FR57xx series from Texas Instruments Incorporated (TI). Samples can be obtained for free, and the MSP-EXP430FR5739 FRAM experimenter’s board is available online for $29.

FRAM can lower system cost, increase system efficiency and reduce complexity while being significantly lower power than flash. If your existing flash-based MCU application has energy, write speed, endurance or power fail backup constraints it may be time to make the switch to FRAM.

References:

What is Proof of concept POC ?

     


  • A Proof of Concept (POC) is a small exercise to test a design idea or assumption. The main purpose of developing a POC is to demonstrate functionality and to verify that a certain concept or theory can be achieved in development.
  • Prototyping is a valuable exercise that allows the innovator to visualize how the product will function; it is a working, interactive model of the end product that gives an idea of the design, navigation, and layout. While a POC shows that a product or feature can be developed, a prototype shows how it will be developed.
  • Viability & Usability
  • A POC is designed purely to verify the functionality of a single concept, or of a set of concepts to be unified into other systems.
  • Its usability in the real world is not taken into consideration when creating a proof of concept, because integration with other technologies is not only time-consuming but might also weaken the ability to determine whether the principal concept is viable. This exercise identifies the product features before jumping into development.
  • A prototype is a first attempt at making a working model that might be real-world usable. Things go wrong in the process, but identifying these errors and stumbling blocks is the principal purpose of building a prototype.
  • A prototype has almost all the functionalities of the end product, but will generally not be as efficient, artistically designed, or durable.
  • The POC method allows a team to share internal knowledge, explore emerging technologies, and provide evidence of the concept to the client for their product.
  • First, the developer assigned to the POC conducts research and begins to develop the feature with the goal of proving that it’s feasible. Once this is proven, the POC is extended to develop an integrated working model to provide a snippet of the final product. After that it’s either presented to the client and the product team to sell the idea for an upcoming project or it can be used internally within the development teams to share knowledge and stimulate innovation.
  • Prototyping is a quick and effective way of bringing a client’s ideas to life, and the prototype serves as a sample for potential users to evaluate, test, and give feedback on for improvements.
  • The final POC does not have to be bug-free but should ultimately show the functionality of the concept.
  • In conclusion, a proof of concept says that a product can be developed and validates its technical feasibility, whereas a prototype is a potentially buggy, unrefined attempt at the final product.

Credits : Entrepreneur

What are the differences between C and Objective-C?


    The C language was developed in the early 1970s by Dennis Ritchie for the UNIX operating system. It is a general-purpose, procedural programming language, used for developing system applications as well as desktop applications.

    Objective-C was developed in the early 1980s by Brad Cox and Tom Love. It is an object-oriented, general-purpose language created with the vision of bringing Smalltalk-style messaging to the C programming language. It allows users to define protocols by declaring classes, and data members can be made public, private, or protected. The language was used at Apple for the iOS and OS X operating systems. Swift was developed at Apple in 2014 to replace it, but there are still plenty of companies maintaining legacy apps written in Objective-C.

    The main difference between C and Objective-C is that C is a procedural programming language with no support for objects and classes, whereas Objective-C is an object-oriented language that combines procedural and object-oriented programming concepts.

Source: GeeksforGeeks

What are Procedural and Object-Oriented Programming?


Procedural Programming:

Procedural programming can be defined as a programming model, derived from structured programming, that is based on the concept of the procedure call. Procedures, also known as routines, subroutines, or functions, consist of a series of computational steps to be carried out. During a program’s execution, any given procedure may be called at any point, including by other procedures or by itself.


Languages used in Procedural Programming:

FORTRAN, ALGOL, COBOL, BASIC, Pascal, and C.


Object Oriented Programming:

Object-oriented programming can be defined as a programming model based on the concept of objects. Objects contain data in the form of attributes and code in the form of methods. In object-oriented programming, computer programs are designed as collections of objects that interact with one another and with the real world. There are many object-oriented languages, but the most popular are class-based, meaning that objects are instances of classes, which also determine their types.


Languages used in Object Oriented Programming:

Java, C++, C#, Python, PHP, JavaScript, Ruby, Perl, Objective-C, Dart, Swift, and Scala.


Source : GeeksforGeeks

What is Scavenging in Two-Stroke Engines ?

     The process of simultaneously purging exhaust gas and filling the cylinder with fresh charge for a new cycle is referred to as scavenging. 

    The main scavenging methods are 

  1. Cross scavenging,
  2. Loop scavenging, and
  3. Uniflow scavenging.

    The gas exchange process in two-stroke engines can be characterized with a number of parameters including delivery ratio, scavenge ratio, scavenge efficiency, purity of charge and trapping efficiency.



Source : DieselNet

What's actually inside your car's steering?

 The chances are your car has Rack and pinion steering. It’s been an incredibly popular engineering choice for years, but have you ever stopped to ponder exactly how it works?

Thankfully, the basics aren’t hard to grasp at all: it’s all about turning rotational motion into linear motion. When you turn the steering wheel, it turns the steering column, which rotates the attached steering shaft and a small gear known as the pinion. This gear sits on the ‘rack’, a length of metal with a series of teeth cut into it. So as the pinion rotates, the rack moves either left or right, depending on your steering input.







Source : CarThrottle



Low Voltage Vs High Voltage Programming


There are two modes for programming a PIC® microcontroller: High-Voltage (HV) mode and Low-Voltage (LV) mode. In HV mode, a programming voltage higher than VDD (typically on the order of 8-13 V) is applied to the MCLR/VPP pin to enter programming mode. The Low-Voltage Programming (LVP) mode instead allows PIC Flash MCUs to be programmed using the operating voltage VDD of the device. This offers many advantages to In-Circuit Serial Programming™ (ICSP™) designs.

Types of States in Digital Output

 A three-state, or tri-state, output has three electrical states: one, zero, and "Hi-Z" (or "open"). The Hi-Z state is a high-impedance state in which the output is effectively disconnected, leaving the signal line open to be driven by another device (or to be pulled up or down by a resistor provided to prevent an undefined state).

High-impedance schemes such as three-state are commonly used for a bus, in which several devices can be selected to drive the bus. 

An open-drain or open-collector output pin is driven by a single transistor, which pulls the pin to only one voltage (generally, to ground). When the output device is off, the pin is left floating (open, or hi-z). A common example is an n-channel transistor which pulls the signal to ground when the transistor is on or leaves it open when the transistor is off.

Open-drain refers to such a circuit implemented in FET technologies because the transistor's drain terminal is connected to the output; open-collector means a bipolar transistor's collector is performing the function.

When the transistor is off, the signal can be driven by another device or it can be pulled up or down by a resistor. The resistor prevents an undefined, floating state. (See the related term, hi-z.) 
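A common firmware pattern follows from this: an open-drain output can be emulated on an ordinary push-pull GPIO by switching the pin between output-low and input (Hi-Z). A minimal sketch; DIR_REG and OUT_REG are stand-ins for your MCU's direction and output registers, not real API names:

#define PIN_MASK (1u << 3)          /* hypothetical pin choice */

volatile unsigned char DIR_REG;     /* stand-in for the data-direction register */
volatile unsigned char OUT_REG;     /* stand-in for the output register */

/* drive the line low: output a hard 0 */
void od_drive_low(void)
{
    OUT_REG &= (unsigned char)~PIN_MASK;  /* output value 0 */
    DIR_REG |= PIN_MASK;                  /* pin becomes an output */
}

/* release the line: switch to input so the pin floats (Hi-Z) and
 * the external pull-up resistor pulls the line high */
void od_release(void)
{
    DIR_REG &= (unsigned char)~PIN_MASK;  /* pin becomes an input */
}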

 A signal line is said to be "floating" if it is not connected to any voltage supply, ground, or ground-referenced signal source.

 Examples:

  • An open-drain or high-impedance (Hi-Z) output when in the off (Hi-Z) mode.

  • In microcomputer systems, a data or address bus may at times be undriven (floating). This is permissible because control signals indicate when data is valid, so users of the bus know when the signal can be ignored.

  • One form of non-volatile memory device is achieved via floating gates: the gate of a MOSFET has no connection, allowing charge to remain on it indefinitely. The gate charge is changed using Fowler-Nordheim tunneling or hot-carrier injection. EPROM, EEPROM, and flash memory are examples.




Source : Maxim



Know the difference : Sensitivity Vs Resolution


 Resolution is the smallest difference in reading that is possible on a given measuring instrument. Take a scale, also known as a ruler: 1 mm (or 0.1 cm) is the resolution of the measuring scale. If I measure the length and width of a book with it, I can say the book is 30.0 cm by 18.4 cm. The resolution of the scale is 0.1 cm because it has 10 equal divisions per centimeter.

Sensitivity, on the other hand, is the smallest change in the measured parameter that causes a reading change equal to the resolution of the equipment. Sensitivity is relevant when measurements are made in units other than the quantity being measured. For example, a load cell, which converts load to millivolts, has a sensitivity specified in mV/V at full scale: a 1 kg load cell with a sensitivity of 3 mV/V gives a 3 mV full-scale output with 1 V of excitation, and 15 mV with 5 V of excitation.

The weighing scale will have a resolution of 1 milligram, but the component inside it, the load cell, has a sensitivity of 3 mV/V FS.

A sensor has sensitivity, while the instrument that displays the final parameter has resolution. A ruler, for instance, has resolution but no sensitivity. Sensitivity is usually attached to sensors or devices that transform one quantity into another.

For example, a thermocouple (which converts temperature to millivolts) has sensitivity, while a thermometer has resolution.

Finally: resolution is the smallest increment that can be read from the instrument, and sensitivity is the smallest applied change that can move the reading by its least significant digit.