
Investments in 2016

Medium
91.9%
Updated 6/1/2025


1. What is this problem about?

The Investments in 2016 interview question is a SQL filtering and summation task. You are given an Insurance table with total investment values for 2015 and 2016 (TIV_2015 and TIV_2016), along with location coordinates (lat, lon). You need to sum the 2016 investment values for policyholders who meet two criteria:

  1. They have the same TIV_2015 value as at least one other policyholder.
  2. They are at a unique location (no other policyholder has the same lat, lon pair).

2. Why is this asked in interviews?

Companies like Meta and Amazon ask the Investments in 2016 coding problem to test a candidate's ability to use Subqueries or Window Functions for complex filtering. It evaluates whether you can identify "duplicates" in one dimension while ensuring "uniqueness" in another. This is a core Database interview pattern for data reconciliation.

3. Algorithmic pattern used

This problem uses Multiple Subquery Filtering.

  1. Criterion 1 (Non-unique 2015 value): WHERE TIV_2015 IN (SELECT TIV_2015 FROM Insurance GROUP BY TIV_2015 HAVING COUNT(*) > 1).
  2. Criterion 2 (Unique Location): AND (lat, lon) IN (SELECT lat, lon FROM Insurance GROUP BY lat, lon HAVING COUNT(*) = 1).
  3. Aggregation: Finally, apply SUM(TIV_2016) to the filtered rows.
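The three steps above combine into a single query. Below is a sketch that runs it against SQLite (3.15+ for row-value `IN`), using the standard table and column names for this problem (Insurance, pid, tiv_2015, tiv_2016, lat, lon) and this article's own three-policy example as sample data:

```python
import sqlite3

# In-memory database with the assumed Insurance schema.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE Insurance (
    pid INTEGER PRIMARY KEY, tiv_2015 REAL, tiv_2016 REAL, lat REAL, lon REAL)""")
conn.executemany(
    "INSERT INTO Insurance VALUES (?, ?, ?, ?, ?)",
    [(1, 10, 200, 1, 1),   # Policy A
     (2, 10, 300, 2, 2),   # Policy B
     (3, 20, 400, 1, 1)],  # Policy C
)

# Criterion 1: tiv_2015 shared with at least one other policyholder.
# Criterion 2: the (lat, lon) pair appears exactly once.
query = """
SELECT ROUND(SUM(tiv_2016), 2) AS tiv_2016
FROM Insurance
WHERE tiv_2015 IN (SELECT tiv_2015 FROM Insurance
                   GROUP BY tiv_2015 HAVING COUNT(*) > 1)
  AND (lat, lon) IN (SELECT lat, lon FROM Insurance
                     GROUP BY lat, lon HAVING COUNT(*) = 1)
"""
print(conn.execute(query).fetchone()[0])  # only Policy B qualifies -> 300.0
```

Note the `ROUND(..., 2)` in the SELECT: it covers the rounding requirement up front rather than as an afterthought.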

4. Example explanation

  • Policy A: 2015=10, 2016=200, Loc=(1,1)
  • Policy B: 2015=10, 2016=300, Loc=(2,2)
  • Policy C: 2015=20, 2016=400, Loc=(1,1)
  1. Policy A and B share 2015=10. Both pass criterion 1.
  2. Policy A shares location (1,1) with C, so Policy A fails criterion 2.
  3. Policy B has a unique location (2,2), so Policy B passes criterion 2.
  4. Policy C doesn't share its 2015 value with anyone, so it fails criterion 1. Result: the sum is just Policy B's 2016 value (300).
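The walkthrough above can be checked mechanically by evaluating each criterion separately and intersecting the results. A sketch using SQLite, with hypothetical pids 1-3 standing in for policies A-C:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Insurance "
             "(pid INTEGER, tiv_2015 REAL, tiv_2016 REAL, lat REAL, lon REAL)")
conn.executemany("INSERT INTO Insurance VALUES (?, ?, ?, ?, ?)",
                 [(1, 10, 200, 1, 1),   # Policy A
                  (2, 10, 300, 2, 2),   # Policy B
                  (3, 20, 400, 1, 1)])  # Policy C

# Criterion 1: pids whose tiv_2015 is shared with another policy.
crit1 = {pid for (pid,) in conn.execute(
    """SELECT pid FROM Insurance WHERE tiv_2015 IN
       (SELECT tiv_2015 FROM Insurance GROUP BY tiv_2015 HAVING COUNT(*) > 1)""")}

# Criterion 2: pids whose (lat, lon) pair is unique.
crit2 = {pid for (pid,) in conn.execute(
    """SELECT pid FROM Insurance WHERE (lat, lon) IN
       (SELECT lat, lon FROM Insurance GROUP BY lat, lon HAVING COUNT(*) = 1)""")}

print(crit1)          # {1, 2}: A and B share tiv_2015 = 10
print(crit2)          # {2}: only B's location (2, 2) is unique
print(crit1 & crit2)  # {2}: only Policy B contributes, tiv_2016 = 300
```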

5. Common mistakes candidates make

  • Single Criterion: Only checking one of the two conditions.
  • Round-off errors: Forgetting to round the final sum to two decimal places if requested.
  • Efficiency: Using complex joins when IN (SELECT ... GROUP BY ...) is much more direct and readable.

6. Interview preparation tip

Get comfortable with the GROUP BY ... HAVING pattern. It is the most robust way to find duplicates (count > 1) or unique items (count = 1) in a relational dataset.
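To illustrate the pattern in isolation, here is a minimal sketch against a generic table (the `items`/`val` names are hypothetical), run through SQLite:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (val INTEGER)")
conn.executemany("INSERT INTO items VALUES (?)",
                 [(1,), (1,), (2,), (3,), (3,), (3,)])

# Values that appear more than once (duplicates).
dupes = [v for (v,) in conn.execute(
    "SELECT val FROM items GROUP BY val HAVING COUNT(*) > 1 ORDER BY val")]

# Values that appear exactly once (unique).
uniques = [v for (v,) in conn.execute(
    "SELECT val FROM items GROUP BY val HAVING COUNT(*) = 1")]

print(dupes)    # [1, 3]
print(uniques)  # [2]
```

Flipping a single comparison (`> 1` vs `= 1`) switches between duplicate detection and uniqueness detection, which is exactly what the two subqueries in this problem do.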
