From IODA Knowledge



By Sriram Reddy (Sensors Without Borders)


There has been an explosion among developers of commodity air quality sensing devices (CAQSDs), both internationally and within India, some of which are ideal for further testing and deployment. Coupled with this, there have been several recent calls from device manufacturers, nonprofits, community organisations, and the citizens using these devices to gauge their data quality: first from a reliability viewpoint, whose two pillars are accuracy and precision, and second from a data completeness standpoint.

We, as device manufacturers and other interested parties, are attempting to draw up a document around the standards process - calling it a Standards Document - by which to impartially and fairly gauge the quality of the current breed of CAQSDs.

Some of these organisations that have expressed an interest include, but are not restricted to:


This manual is an attempt at an honest breakdown of the current breed of Indian sensors available in the market, so that consumers and other interested parties, including regulators, can make choices among the available sensor types for their own purposes and applications.

What if nonprofits and social enterprises had an affordable way to report real-time, large-scale data on their social impact?

Organisations with an interest in commodity air quality sensing - whether in the devices themselves, calibration techniques, or time-series database technologies coupled with visualisation systems incorporating GIS elements - are under pressure to measure their performance and results.

Many low-cost, sensor-based tools already exist to help collect data on a large-scale, real-time basis. Yet, while both supply and demand for sensor-based tools exist, the community - which includes individuals, nonprofits, pollution-heavy industries, social enterprises, and potentially even regulatory organisations - often fails to take advantage of them.

The first issue is access. There isn’t a central ecosystem or marketplace through which organisations can access sensor-based tools and come to understand their pros and cons, as well as their applicability to specific needs.

The other issue is technical language. We hope that this guide will fill some of the gaps in the technical language around sensor technology. We also present a perhaps more practically relevant view of each technology: its deployment flexibility, power options, maintenance considerations, and the presence of documentation guides.

In addition to addressing these gaps, this catalogue goes a step further by providing recommendations that help users decide among available sensor devices for their various applications.

Beyond these targeted recommendations, the catalogue displays all relevant research findings so that users can draw their own comparisons.

This catalogue aims to present the options as neatly and simply as possible so that its audience - individuals, research institutions, nonprofits, polluters, social enterprises, and potentially even regulatory organisations - can understand them and take action.

But such simplification risks eliminating some of the nuances and complexities of individual tools. The result is a careful balancing of simplicity and complexity, rigour and practicality, and subjectivity and objectivity.

This field of environmental sensor technology is dynamic and fast-moving. New tools come out on the market on a regular basis. Existing tools frequently expand their features to cater to users’ needs and challenge their competitors.

Given this dynamism, the online version of this catalogue will be updated regularly.

We hope you find this catalogue useful and relevant. For any comments and feedback, please reach us [1].


  • Regulators
  • Citizens
  • Community Organisations
  • Emitters
  • Citizen Scientists


  • Technical and Non-technical parametric list (working draft) can be found here;


  • National Physical Laboratory, Delhi;
  • CE certification.


  • CSE;
  • TERI;
  • United Nations Environment Programme;
  • Asian Development Bank;
  • Premier Initiative for Asia, Manila / Delhi Office;
  • IIT Delhi (Mukesh Khare);
  • IIT Bombay (Mukesh Sharma);
  • Dr Kirk Smith, Berkeley;


  • Refer to Kopernik for mobile collection tool survey

Overall Philosophy / Standards Utility

  • However, some device developers may use custom algorithms - for instance, for conversion from particle counts to mass concentrations - beyond what is already provided by the sensor module manufacturer; where this is the case, it will be mentioned as such;
  • Otherwise, we believe these results can be used as a proxy for the results that would be obtained if we were to conduct the same tests;
  • However, we will be doing standard co-location testing alongside a regulatory reference sensor, evaluated against the Assessment Criteria below.
  • We will check to see if the sensor has been calibrated for high RH conditions known to be detrimental to accurate PM readings;
  • We will run these sensors through a series of indoor and outdoor tests, varying meteorological parameters along the way, to ascertain their stability, accuracy, precision, and reliability.
  • We will also investigate pricing options along with deployment flexibility.
  • See Survey Instrument below.
    • <Can bring in external M & E framework such as Preval >
  • The reference instrument for this testing will be provided by government labs (ask for lab name from Hindustan Times person).

Why Calibration?

The aim of calibration is to validate a primary instrument of unknown data quality against a gold-standard reference sensor of known quality. Comparing the datasets from these instruments, and statistically fitting one to the other, yields a calibration curve; post-processing all sensed environmental data with this curve improves the accuracy of the primary instrument.
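The fitting process described above can be sketched as follows. This is a minimal illustration assuming a simple linear relationship; all readings are hypothetical values, not measurements from any device discussed here.

```python
# Sketch: derive a linear calibration curve from co-located readings.
# All readings below are hypothetical illustrative values (ug/m3).
sensor = [38.0, 55.0, 72.0, 90.0, 110.0]    # primary (low-cost) device
reference = [30.0, 45.0, 60.0, 75.0, 90.0]  # gold-standard instrument

n = len(sensor)
mean_s = sum(sensor) / n
mean_r = sum(reference) / n

# Ordinary least squares: reference ~ slope * sensor + intercept
slope = (sum((s - mean_s) * (r - mean_r) for s, r in zip(sensor, reference))
         / sum((s - mean_s) ** 2 for s in sensor))
intercept = mean_r - slope * mean_s

def calibrate(raw):
    """Post-process a raw sensed value with the calibration curve."""
    return slope * raw + intercept
```

In practice the fit would use time-matched co-location data over a much longer window, and possibly a nonlinear or humidity-dependent model, but the post-processing step is the same: every raw reading passes through the fitted curve.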

Our Technical Approach

  • Sensor parameters such as data completeness, sensor accuracy and sensor precision will be judged during testing.
  • Time length of data comparisons
    • 1 week’s worth of data
    • 5 min, 15 min, 30 min, 1 hr, 8 hr and 24 hr averages will be determined against the reference sensor.
  • External:
    • Are technology roadmaps the same across the Asian region?
      • For instance, what generations of sensors are being used in various devices?
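The averaging-window comparison above can be sketched as follows. The series are hypothetical 1-minute readings; a real comparison would use a week of time-stamped data and the full set of windows up to 24 hr.

```python
# Sketch: compare device vs reference at several averaging windows.
# Series are hypothetical 1-minute PM readings (ug/m3).
device = [40 + (i % 7) for i in range(60)]     # 1 hour of 1-min data
reference = [38 + (i % 7) for i in range(60)]

def block_average(series, minutes):
    """Non-overlapping block averages over the given window length."""
    return [sum(series[i:i + minutes]) / minutes
            for i in range(0, len(series) - minutes + 1, minutes)]

for window in (5, 15, 30, 60):   # minutes; longer windows need longer series
    d = block_average(device, window)
    r = block_average(reference, window)
    bias = sum(x - y for x, y in zip(d, r)) / len(d)
    print(f"{window:>2} min average bias: {bias:+.2f} ug/m3")
```

Computing the bias (and, similarly, correlation) at each window shows how agreement with the reference changes as short-term noise is averaged out.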

Our approach to Trident calibration, composed as it is of three Nova sensors, is as follows.

1. Co-locate all Nova sensors for 24 hrs and obtain the individual and aggregated means and standard deviations of all Nova sensor data.

2. Identify the Nova sensor whose mean and standard deviation are closest to the computed aggregate mean and standard deviation.

This then becomes the internal reference sensor, i.e. the sensor whose data distribution most closely resembles the aggregated data distribution.

3. This primary internal reference sensor then becomes the sensor against which all other new Nova sensors are compared to derive the ‘Internal Calibration Factor’ (ICF).

4. Use this primary internal reference sensor to calibrate against an external reference sensor, using the means and standard deviations of the two sensors’ data distributions, to obtain a linear regression curve that becomes the ‘External Calibration Factor’ (ECF).

This is an effort that may need to be repeated over time to ensure that the most appropriate ECF is being used.

The Total Calibration Factor (TCF) then simply accounts for both the ICF and the ECF, and is such that


So every new Nova sensor will have to be run through this experiment once before it is ready for field deployment, in order to derive its unique ICF.

This obtained ICF along with the pre-computed ECF is then used to derive the particular Nova’s TCF.
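The procedure above might be sketched as follows. The sensor names and readings are hypothetical, and we assume, as one plausible reading of this draft, that each calibration factor is a linear fit; the exact formula combining ICF and ECF into the TCF is left open here, as it is in the text above.

```python
import statistics

# Hypothetical 24 hr co-location readings (ug/m3) from three Nova sensors.
readings = {
    "nova_a": [52.0, 55.0, 58.0, 61.0],
    "nova_b": [50.0, 54.0, 57.0, 60.0],
    "nova_c": [48.0, 51.0, 55.0, 58.0],
}

# Step 1: aggregate mean and standard deviation across all sensors.
pooled = [v for series in readings.values() for v in series]
agg_mean = statistics.mean(pooled)
agg_std = statistics.stdev(pooled)

# Step 2: the internal reference is the sensor whose mean and standard
# deviation lie closest to the aggregate values.
def distance(series):
    return (abs(statistics.mean(series) - agg_mean)
            + abs(statistics.stdev(series) - agg_std))

internal_ref = min(readings, key=lambda name: distance(readings[name]))

# Step 3: ICF for each other sensor, as a linear (OLS) fit against the
# internal reference, giving a (slope, intercept) pair per sensor.
def linear_fit(x, y):
    mx, my = statistics.mean(x), statistics.mean(y)
    slope = (sum((a - mx) * (b - my) for a, b in zip(x, y))
             / sum((a - mx) ** 2 for a in x))
    return slope, my - slope * mx

icf = {name: linear_fit(series, readings[internal_ref])
       for name, series in readings.items() if name != internal_ref}
```

Step 4 (the ECF) would apply the same `linear_fit` to the internal reference sensor against the external regulatory reference; applying a new Nova’s ICF and then the shared ECF to its raw data yields its fully corrected readings.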

Calibration/Validation Operationalisation

Chamber description:

  • CFD modelling of the stable conditions room.
  • We are attempting to set up a standardised means of testing for commodity air quality sensing devices that involves two parts:
  • The first step involves the placement of devices within a chamber with controlled, stable initial conditions; CFD modelling can help us determine device locations.
  • We can repeat the experiment in the chamber to obtain replicate measurements, which will be useful, or we can use the first set of collated data as is. We should aim to repeat at least ten such experiments over the next 3 months.
  • In analysing these data sets, and in using this calibration work to inform our traffic emissions exercise, we will have the opportunity over the next three months to look at such calibration opportunities with varying conditions;
  • We should also look at collecting ambient-condition data with one device within a Jellyfish, and one without, to determine how the data sets vary, and attempt to correlate the differences with meteorology.

Data Comparisons

We can then treat the difference between the data sets as the impact of meteorological conditions on accuracy (perhaps through some form of regression to account for those conditions).