
Update the posts.

tags/2020.02.01
Break Yang 8 months ago
commit 0adadcefad
7 changed files with 225 additions and 14 deletions
  1. content/home/hero.md (+6 -6)
  2. content/post/quantile-regression.md (+9 -8)
  3. static/img/planck_siamese.jpg (BIN)
  4. static/posts/least_square_loss.png (BIN)
  5. static/posts/least_square_loss.svg (+90 -0)
  6. static/posts/quantile_loss.png (BIN)
  7. static/posts/quantile_loss.svg (+120 -0)

content/home/hero.md (+6 -6)

@@ -8,7 +8,7 @@ weight = 10 # Order that this section will appear.
title = ""

# Hero image (optional). Enter filename of an image in the `static/img/` folder.
hero_media = "hero-academic.png"
hero_media = ""

[design.background]
# Apply a background color, gradient, or image.
@@ -24,11 +24,11 @@ hero_media = "hero-academic.png"
gradient_end = "#2b94c3"
# Background image.
# image = "" # Name of image in `static/img/`.
# image_darken = 0.6 # Darken the image? Range 0-1 where 0 is transparent and 1 is opaque.
# image_size = "cover" # Options are `cover` (default), `contain`, or `actual` size.
# image_position = "center" # Options include `left`, `center` (default), or `right`.
# image_parallax = true # Use a fun parallax-like fixed background effect? true/false
image = "planck_siamese.jpg" # Name of image in `static/img/`.
image_darken = 0.2 # Darken the image? Range 0-1 where 0 is transparent and 1 is opaque.
image_size = "cover" # Options are `cover` (default), `contain`, or `actual` size.
image_position = "center" # Options include `left`, `center` (default), or `right`.
image_parallax = true # Use a fun parallax-like fixed background effect? true/false
# Text color (true=light or false=dark).
text_color_light = true


content/post/quantile-regression.md (+9 -8)

@@ -1,5 +1,6 @@
---
# Documentation: https://sourcethemes.com/academic/docs/managing-content/
+# Documentation: https://sourcethemes.com/academic/docs/writing-markdown-latex/

title: "Quantile Regression"
subtitle: ""
@@ -60,8 +61,7 @@ The most widely used loss function in supervised learning (e.g. the
regression we are talking about) is mean square. Mean square loss can be written
as the sum (actually the mean, but the 1/n factor is a constant anyway) of $f_i(y) = (y - y_i)^2$.

-[[./least_square_loss.png]]
-
+![Least Square Loss](/posts/least_square_loss.png)

Let's first try to reason about the mean square loss, i.e. how it drives the
optimization (training). Imagine you choose some parametrized function to
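
Not part of the commit: the experiment this hunk sets up, minimizing the mean square loss over a single constant prediction, can also be run numerically. A minimal sketch assuming NumPy and SciPy, with made-up sample values:

```python
# Not part of the commit: check that the mean square loss, minimized over a
# single constant prediction y, lands on the mean of the made-up y_i.
import numpy as np
from scipy.optimize import minimize_scalar

y_i = np.array([1.0, 2.0, 2.5, 7.0, 10.0])  # made-up sample values

best = minimize_scalar(lambda y: np.mean((y - y_i) ** 2)).x
print(best, y_i.mean())  # both ~4.5: the minimizer is the sample mean
```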
@@ -74,12 +74,13 @@ Quantile losses are not so different from the ordinary mean square loss. In
fact, they just replace $f_i(y)$ with the L1 norm and its friends (in mean
square loss, it is the L2 norm).

-[[./quantile_loss.png]]
+![Quantile Loss](/posts/quantile_loss.png)
+

In the above graph, there are two examples of quantile losses.

1. $f_i(y) = |y-y_i|$, which guides the value toward the median (50-th percentile) of $y_i$
-2. $f_i(y) = \begin{cases}0.1 * |y_i - y| & y < y_i\\0.9*|y - y_i| & y \geq y_i\end{cases}$, which guides the value toward the 10-th percentile of $y_i$.
+2. $f_i(y) = \begin{cases}0.1 * |y_i - y| & y < y_i\\\\0.9*|y - y_i| & y \geq y_i\end{cases}$, which guides the value toward the 10-th percentile of $y_i$.

It might be hard to grasp at first glance how quantile losses can do that. They
actually follow quite simple intuitions.
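
Not part of the commit, but a quick numerical check of the two claims above: minimizing each loss over a constant prediction should land on the stated percentile of the sample. A minimal sketch assuming NumPy and SciPy, with made-up data:

```python
# Not part of the commit: check that the pinball loss with parameter p,
# minimized over a constant prediction y, recovers the p-th quantile of y_i.
import numpy as np
from scipy.optimize import minimize_scalar

y_i = np.random.default_rng(0).normal(size=10_000)  # made-up sample

def pinball(y, p):
    # p * (y_i - y) where y sits below y_i, (1 - p) * (y - y_i) otherwise
    return np.mean(np.where(y < y_i, p * (y_i - y), (1 - p) * (y - y_i)))

for p in (0.5, 0.1):  # the two examples: median and 10-th percentile
    best = minimize_scalar(lambda y: pinball(y, p), bounds=(-5, 5), method="bounded").x
    print(p, best, np.quantile(y_i, p))  # the minimizer matches the p-quantile
```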
@@ -106,12 +107,12 @@ the median.

The analysis of an arbitrary quantile loss $f_i$ then follows immediately.

-\[
+$$
f_i(y) = \begin{cases}
-p * (y_i - y) & y < y_i \\
+p * (y_i - y) & y < y_i \\\\
(1 - p) * (y - y_i) & y \geq y_i
\end{cases}
-\]
+$$

Still, do the same imaginary experiment with an arbitrary $y'$ moving along
the axis. In this case, we penalize differently for $y_i$ on the left side and
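
Not part of the commit, but the reasoning step sketched here can be made explicit: at the minimizer, the per-point pull of $p$ from the $y_i$ above cancels the per-point pull of $1 - p$ from the $y_i$ below, so a fraction $p$ of the points must sit below:

$$
\frac{d}{dy} \sum_i f_i(y) = -p \cdot \#\{i : y_i > y\} + (1 - p) \cdot \#\{i : y_i < y\} = 0
\quad\Longrightarrow\quad
\frac{\#\{i : y_i < y\}}{n} = p
$$

i.e. the minimizer is the $p$-th quantile.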
@@ -126,4 +127,4 @@ There are many articles talking about how to make use of such quantile losses.
And they are indeed useful. 90-th and 10-th percentile estimates already tell you
enough about how uncertain your estimation is.

-Read [[https://towardsdatascience.com/quantile-regression-from-linear-models-to-trees-to-deep-learning-af3738b527c3][this post]] for more about the applications.
+Read [this post](https://towardsdatascience.com/quantile-regression-from-linear-models-to-trees-to-deep-learning-af3738b527c3) for more about the applications.
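
Not part of the commit: as a sketch of that application, scikit-learn's GradientBoostingRegressor accepts loss="quantile" directly; the data and the 10%/90% pair below are illustrative assumptions, not from the post.

```python
# Not part of the commit: an ~80% uncertainty band from two quantile regressors.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(500, 1))          # made-up 1-D feature
y = np.sin(X[:, 0]) + rng.normal(0, 0.3, 500)  # noisy made-up target

lo = GradientBoostingRegressor(loss="quantile", alpha=0.1).fit(X, y)
hi = GradientBoostingRegressor(loss="quantile", alpha=0.9).fit(X, y)

print(lo.predict([[5.0]]), hi.predict([[5.0]]))  # band around y at x = 5
```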

static/img/planck_siamese.jpg (BIN)

Width: 4032  |  Height: 3024  |  Size: 2.3MB

static/posts/least_square_loss.png (BIN)

Width: 463  |  Height: 523  |  Size: 10KB

static/posts/least_square_loss.svg (+90 -0)

File diff suppressed because it is too large


static/posts/quantile_loss.png (BIN)

Width: 380  |  Height: 301  |  Size: 11KB

static/posts/quantile_loss.svg (+120 -0)

File diff suppressed because it is too large

