From Joseph Converse, a puzzle of digital manipulation:

Imagine taking a number and moving its last digit to the front. For example, 1,234 would become 4,123. What is the smallest positive integer such that when you do this, the result is exactly double the original number? (For bonus points, solve this one without a computer.)

Here is the solution… **(spoiler warning!)**

Let \(N\) be the “original” number we are searching for. Write \(N = 10 m + n\), where \(n \in [1..9]\) is the last digit and \(m\) is the rest of the digits. We have \(m \in [10^{k - 1}..10^k - 1]\) for some \(k \in \mathbb{N}^+\). We want to find values satisfying

\(2 \cdot (10 m + n) = 10^k n + m\)

We can rewrite this to solve for \(m\) in terms of \(k\) and \(n\):

\(20 m + 2 n = 10^k n + m\)

\(19 m = (10^k - 2) n \qquad (1)\)

Since 19 is prime and \(n \lt 19\), we can apply Euclid’s lemma to conclude that 19 divides \(10^k - 2\). This is the same as

\(10^k \equiv 2 \quad\pmod{19}\)

This is a discrete logarithm problem with solution

\(k = 17 + 18 i\) for all \(i \in \mathbb{N} \qquad (2)\)

(This implies that any solution \(N\) must have at least 17 digits. Good thing we didn’t try a brute force search for \(N\)!)
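The claim that 17 is the smallest such exponent is easy to check by brute force. Here is a quick sketch (not part of the original solution; `smallestK` and `powMod` are names introduced here):

```haskell
-- Smallest k >= 1 with 10^k ≡ 2 (mod 19), found by brute force.
smallestK :: Integer
smallestK = head [k | k <- [1 ..], powMod 10 k 19 == 2]
  where
    -- Modular exponentiation by repeated squaring.
    powMod :: Integer -> Integer -> Integer -> Integer
    powMod _ 0 _ = 1
    powMod b e m
      | even e    = sq
      | otherwise = sq * b `mod` m
      where
        half = powMod b (e `div` 2) m
        sq   = half * half `mod` m
```

Evaluating `smallestK` in GHCi gives 17, matching the claim above.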

At this point we just need to pick \(i\) and \(n\), and the values for \(k\) and \(m\) will follow immediately from (1) and (2). We have to choose \(i\) and \(n\) so that the constraint \(m \in [10^{k - 1}..10^k - 1]\) is satisfied. Certainly, we will get the smallest solutions from \(i = 0\) if such solutions exist, since increasing \(i\) increases \(N\) much faster than increasing \(n\). This gives us \(k = 17\). Substituting, we have:

\(10^{16} \leq \frac{10^{17} - 2}{19} n \leq 10^{17} - 1\)

The smallest \(n\) satisfying this is \(n = 2\). Putting it all together, we have

\(i = 0,\ k = 17,\ n = 2\)

\(m = \frac{10^k - 2}{19} n = 10526315789473684\)

\(N = 10 m + n = 105263157894736842\)

And indeed, \(2 N = 210526315789473684\).
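As a final sanity check, the digit rotation itself is easy to express in code (a small sketch added here; `rotate` is a name made up for this illustration):

```haskell
-- Move the last digit of a positive integer to the front.
rotate :: Integer -> Integer
rotate x = n * 10 ^ length (show m) + m
  where
    (m, n) = x `divMod` 10

-- rotate 1234 == 4123, and rotate 105263157894736842 doubles it.
```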


Recall again the definition of the 0-1 integer programming problem:

Input: \(m\)-by-\(n\) integer matrix \(A\) and length-\(m\) integer vector \(b\)

Output: true if and only if there exists a length-\(n\) 0-1 vector \(x\) such that \(Ax = b\)

A solution vector \(x\) needs to meet two criteria: it must contain only zeros and ones, and it must satisfy \(Ax = b\). It is easy to generate a vector that meets either of these two criteria on its own; it is only the combination of the two that makes the problem hard.

For the second criterion, generating a vector that satisfies \(Ax = b\) can be done by reducing the augmented matrix \((A|b)\) to reduced row echelon form (RREF). This gives us another system of linear equations that has the same solution set as the original, but with the coefficients in a structured format. Once the matrix is in RREF, it is easy to determine whether the system of linear equations has a solution, and if it does to substitute the remaining free parameters with arbitrary values and read off a solution vector.
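To make the RREF step concrete, here is a minimal row reduction over exact `Rational` arithmetic (a sketch written for this post, not production code; `rref` operates on a matrix given as a list of rows):

```haskell
-- Reduce a matrix (as a list of rows) to reduced row echelon form,
-- using exact rational arithmetic to avoid floating-point error.
rref :: [[Rational]] -> [[Rational]]
rref m0 = go m0 0 0
  where
    nRows = length m0
    nCols = if null m0 then 0 else length (head m0)
    go mat r c
      | r >= nRows || c >= nCols = mat
      | otherwise =
          case [i | i <- [r .. nRows - 1], mat !! i !! c /= 0] of
            []      -> go mat r (c + 1)  -- no pivot in this column
            (i : _) ->
              let swapped  = swapRows r i mat
                  -- Scale the pivot row so its leading entry is 1,
                  -- then eliminate column c from every other row.
                  pivotRow = map (/ (swapped !! r !! c)) (swapped !! r)
                  elim j row
                    | j == r    = pivotRow
                    | otherwise = zipWith (\a b -> a - (row !! c) * b) row pivotRow
              in go (zipWith elim [0 ..] swapped) (r + 1) (c + 1)
    swapRows i j xs =
      [ if k == i then xs !! j else if k == j then xs !! i else x
      | (k, x) <- zip [0 ..] xs ]
```

Once the augmented matrix \((A|b)\) is in this form, reading off solvability and a particular solution is mechanical.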

If a system of linear equations over \(n\) variables has any solutions at all, its solution space is an affine subspace of the \(n\)-dimensional Euclidean space. It can be a point, a line, a plane, or in general a \(d\)-dimensional flat with \(d \leq n\). This suggests the following reformulation of the 0-1 integer linear programming problem, which might be called **0-1 affine subspace intersection**:

Input: description of an affine subspace \(S\) of \(n\)-dimensional Euclidean space

Output: true if and only if \(S\) contains any corner of the unit \(n\)-hypercube

This geometric rephrasing of 0-1 integer programming emphasizes how easy this hard problem might seem at first glance. The difficulty comes from the \(n\)-hypercube having a number of corners exponential in \(n\), which means we can’t get an efficient algorithm by just individually testing every corner for membership in the subspace. Any polynomial time algorithm for this problem would have to exploit some nontrivial geometric property of the problem.
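For contrast, the naive corner-by-corner test is easy to write down (a sketch added here; `solve01` is a name invented for this illustration):

```haskell
import Control.Monad (replicateM)
import Data.Maybe (listToMaybe)

-- Brute-force 0-1 integer programming: test all 2^n candidate vectors.
-- This is exactly the exponential search the text warns about.
solve01 :: [[Integer]] -> [Integer] -> Maybe [Integer]
solve01 a b = listToMaybe [x | x <- replicateM n [0, 1], matVec a x == b]
  where
    n = if null a then 0 else length (head a)
    matVec rows x = [sum (zipWith (*) row x) | row <- rows]
```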

Traditionally, converting a matrix to RREF is done by Gaussian elimination. What is the time complexity of Gaussian elimination? Most sources give it as \(O(n^3)\) for a matrix with maximum side length \(n\). However, this is actually misleading: \(O(n^3)\) is the *arithmetic complexity* of Gaussian elimination. Gaussian elimination is an example of an algorithm that is weakly polynomial time, but not strongly polynomial time, meaning that it runs in polynomial time in the real RAM model but does not in general run in polynomial time when implemented on a Turing machine. The worst-case *bit complexity* of standard Gaussian elimination is actually exponential!

Fortunately, there are known algorithms with worst-case polynomial bit complexity for converting matrices of rational numbers to RREF. Even better for our purposes, there are known algorithms with worst-case polynomial bit complexity for computing a description of the set of all *integer* solutions to a system of linear equations, as discussed in a fascinating 2015 article on the Gödel’s Lost Letter blog.

As described in this 2009 article referenced from the GLL blog post, the solution space of an integer matrix equation \(AX = B\) looks like this, where all of the variables are integer matrices and \(Z\) is a matrix of free parameter integer variables:

\(X = Q \begin{bmatrix} \overline{D}\,^{-1}\,\overline{PB} \\ Z \end{bmatrix}\)

This implies that the set of integer solutions to a system of linear equations is an \(m\)-dimensional lattice of integer points embedded in \(n\)-dimensional Euclidean space. This lattice can be written as the span of a basis of linearly-independent integer vectors, plus an integer affine offset from the origin.

This suggests another representation of the 0-1 integer programming problem, which might be called **0-1 lattice intersection**:

Input: set of length-\(n\) integer vectors \(U\) and length-\(n\) integer vector \(v\)

Output: true if and only if there exists a length-\(n\) 0-1 vector \(x\) such that \(x – v\) is a member of the integer lattice spanned by \(U\)

This is reminiscent of the class of lattice problems in cryptography, some of which are likewise known to be NP-hard.

What computational complexity class does this problem fall into? Can we do it in polynomial time if we take the degree \(d\) to be a fixed constant?

As it turns out, this problem is NP-hard, even for \(d = 2\). The proof is straightforward and requires no algebraic geometry or other “hard” math—you just have to know the trick.

We can reduce from the 0-1 integer linear programming problem, which is one of Karp’s original 21 NP-complete problems. 0-1 integer programming is defined as follows:

Input: \(m\)-by-\(n\) integer matrix \(A\) and length-\(m\) integer vector \(b\)

Output: true if and only if there exists a length-\(n\) 0-1 vector \(x\) such that \(Ax = b\)

We can write the matrix equation \(Ax = b\) as a set of \(m\) linear equations in \(n\) variables. This gives us a system of polynomial equations, but without further constraints we are not guaranteed to end up with a solution that is valid for the original 0-1 integer programming problem, since the variables are allowed to range over \(\mathbb{C}\) rather than being restricted to 0 or 1.

However, this is easy to fix—we just need to add additional polynomial equations that constrain each variable to be either 0 or 1. It is sufficient to add an equation \({x_i}^2 – x_i = 0\) for each variable \(x_i\), since the solution set of this equation is exactly \(x_i \in \{0, 1\}\). This gives us a system of \(m + n\) polynomial equations in \(n\) variables with maximum degree 2, where an assignment of the variables is a solution of the system if and only if it is a solution to the original integer programming problem. Since this reduction is clearly possible to perform in polynomial time, we can conclude that determining whether a system of degree-2 polynomials with integer coefficients has a solution is NP-hard.
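As a small worked example (added here for illustration): the one-constraint instance \(A = \begin{pmatrix} 1 & 1 \end{pmatrix}\), \(b = (1)\) becomes the degree-2 system

\(x_1 + x_2 - 1 = 0, \quad {x_1}^2 - x_1 = 0, \quad {x_2}^2 - x_2 = 0\)

whose solutions are exactly \((x_1, x_2) \in \{(0, 1), (1, 0)\}\), the 0-1 solutions of the original instance.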

There are obvious downsides to this approach, such as the potential for false positives (good comments that are incorrectly classified as spam, perhaps due to the infamous Scunthorpe problem) as well as the high rate of false negatives (spam comments that are not recognized as such and have to be deleted manually). However, word blacklists are available as a built-in feature of WordPress, so I don’t have to use a paid subscription blog spam filtering service such as Akismet. Also, the simplicity and controllability of the approach are nice.

In the rest of this post, I will list and describe all of the string filters I use, so that other bloggers can copy them if so desired.

The single most effective set of blacklisted strings that I use is a short list of common Cyrillic characters. Since this is an English language blog but a great deal of spam is written in Russian (or pseudo-Russian gibberish), this filter is very powerful for its small size. The particular list of characters that I use is taken from an article elsewhere on the Internet which, sadly, I can no longer find. The list is as follows:

д и ж Ч Б Џ Ђ ћ Р° Ѓ

Another common language that I receive spam comments in is Japanese. Almost all Japanese text can be efficiently filtered out with this even shorter list of characters:

。 ー の

Next, we have the medications. This is a very effective filter, but unfortunately the list has to be updated frequently as the distribution of drugs being pushed in the spam I receive changes over time. Also, *cialis* cannot be included, since as Wikipedia notes, it is contained as a substring in the common word *specialist*; nor *ambien*, as it is a substring of *ambient*. The brand name *ultram* is probably safe, however, unless I start posting Warhammer 40K content.

adderall alprazolam clomid clonazepam clopidogrel diazepam doxycycline effexor ephedrine ivermectin klonopin lasix lunesta oxymorphone phentermine restoril retin a retin-a sildenafil tetracycline tramadol ultram valacyclovir valium viagra vicodin xanax zoloft zolpidem

Next up, we have distinctive phrases that occur in certain fixed spam messages that get posted over and over again. This filter is not very effective in the long run, since the particular spam messages tend to change over time, but as a short-term fix to get rid of individual really persistent spammers, it can work pretty well.

going to put you in the freezer as punishment hard to find your site in google hard to find your website in google I noticed that your On-Page SEO is is missing a few factors I noticed your site lost rank in google missing out on at least 300 visitors per day We have decided to open our POWERFUL and PRIVATE website traffic system

(Yes, the phrase “going to put you in the freezer as punishment” was actually present in a spam comment I received over and over again for a while several years ago. It’s from a joke about a guy putting his pet parrot in the freezer. Look it up if you are really desperate to know.)

Similarly, I also have a small pile of fixed URLs and website names that get spammed over and over again for a period of time. I’m not going to list them here, since including them seems likely to get this site banned from search engine results. Besides, they usually only work as filters for a short period of time before the spammers move on to greener pastures.

On the other end of the spectrum, we have common and widely-used phrases that happen to occur frequently in spam from many different sources while not being likely in legitimate comments relevant to the content on my blog. This is a particularly tricky category, since these phrases could easily occur in genuine comments if the subject matter of my blog strays too far into certain territory. Because of this, I only have two phrases of this sort blacklisted at the moment:

search engine optimization where to buy

The largest category of filtered strings that I have at the moment is types and brand names of products that the spam purports to offer for sale at cheap prices. This is another category where a level of care is required, because it would be easy to accidentally filter out a legitimate comment that just happens to mention one of these items. Here is the list I am currently using:

air jordan auto insurance babyliss burberry canada goose gucci handbag hermes high heel jerseys jimmy choo jordans lacoste louboutin louis vuitton lululemon marc jacobs michael kor michaelkor mlb jersey moncler nail art nba jersey newbalance nfl jersey nike oakley sunglasses payday loan prada ray ban uggs wholesale beads

And last but certainly not least, we have the vices. These should be reasonably safe to filter out as long as my blog doesn’t get too, uh, *spicy*.

casino erotic porn sexy

And there you have it. A few simple word filters can catch the majority of the spam comments this blog receives. Not bad for what it is.
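The mechanics of such a filter are trivial; for the curious, a case-insensitive substring blacklist boils down to something like this (a sketch added here; the blog itself uses WordPress’s built-in feature, and `isSpam` is a name invented for this illustration):

```haskell
import Data.Char (toLower)
import Data.List (isInfixOf)

-- True if the comment contains any blacklisted string, ignoring case.
-- Substring matching is what causes Scunthorpe-style false positives.
isSpam :: [String] -> String -> Bool
isSpam blacklist comment = any (`isInfixOf` lowered) (map lower blacklist)
  where
    lower   = map toLower
    lowered = lower comment
```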

I discovered this bug when I wrote some code that compiled in Eclipse, committed it, and then got an email a few minutes later from our Jenkins continuous integration server saying that the build failed. From the error message, I managed to track it down to a specific section of code that compiled in Eclipse but gave a compile error in javac.

This isn’t the first time I’ve run into a Java compiler or standard library bug while developing CertSAFE, nor is it the first time that I’ve submitted a bug report via the Oracle web form. However, it is the first time that I’ve had a report accepted and published as a verified OpenJDK bug.

I’m always happy when I find a compiler bug, because it makes me feel better about bugs in my code to know that the platform developers screw up too.

```haskell
data MergeableSet = ...

type Elem = Int

empty     :: (Elem, Elem) -> MergeableSet
singleton :: (Elem, Elem) -> Elem -> MergeableSet
size      :: MergeableSet -> Int
toList    :: MergeableSet -> [Elem]
union     :: MergeableSet -> MergeableSet -> MergeableSet
```

Seems fairly reasonable, right? I’m going to show that **it is likely that no such data structure exists**.

First, note that some very similar data structures do in fact exist. Haskell’s Data.Set can be used to implement this interface with \(O(1)\) `singleton` and `size`, \(O(\log(n))\) membership testing (which is obviously much more powerful than `toList`), and \(O(n)\) `union`. Brodal, Makris, and Tsichlas (2006) presented a purely functional data structure that has \(O(1)\) `singleton`, \(O(\log(n))\) membership testing, and \(O(1)\) “`join`”, which is the same as `union` but requires every element in the first set to be strictly less than every element in the second set.

So why is the variant above so implausible?

If a `MergeableSet` data structure with the given time bounds exists (even without the `size` operation), then **it is possible to find the transitive closure of an \(n\)-vertex graph in near-optimal time \(O(n^2 \log(n)^c)\)**.

The algorithms for computing the transitive closure of a graph with the current best known worst-case runtime are based on algorithms for fast matrix multiplication. In particular, the transitive closure of an \(n\)-vertex graph can be computed in time \(O(n^\omega)\), where \(\omega < 2.373\) is the best known exponent for matrix multiplication. A faster algorithm for transitive closure would actually give a faster algorithm for Boolean matrix multiplication as well, as noted by Fischer and Meyer (1971).

Now, the problem of finding the transitive closure of a general graph can be reduced to the problem of finding the transitive closure of a directed acyclic graph. We can just compute the strongly connected components of the graph using any of the several linear-time algorithms, then compute the transitive closure of the resulting kernel DAG. Looping over the pairs of vertices in the original graph to move back to the starting domain takes \(O(n^2)\) time, but since the size of the output is \(n^2\) bits anyway there’s no additional asymptotic cost.
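The SCC step is standard; with GHC’s `Data.Graph`, condensing a graph to its kernel DAG might look like this (a sketch added here; `kernelDAG` is a name invented for this illustration, and component numbering follows whatever order `scc` returns):

```haskell
import Data.Graph (Graph, Vertex, buildG, edges, scc)
import Data.List (nub)
import qualified Data.Map as Map
import Data.Tree (flatten)

-- Condense a graph to its kernel DAG of strongly connected components.
kernelDAG :: Graph -> Graph
kernelDAG g = buildG (0, nComps - 1) compEdges
  where
    comps  = map flatten (scc g)
    nComps = length comps
    -- Map each original vertex to the number of its component.
    compOf = Map.fromList [(v, i) | (i, vs) <- zip [0 ..] comps, v <- vs]
    -- Keep only edges that cross between distinct components.
    compEdges = nub
      [ (compOf Map.! v1, compOf Map.! v2)
      | (v1, v2) <- edges g
      , compOf Map.! v1 /= compOf Map.! v2 ]
```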

Suppose then that `MergeableSet` exists and we want to find the transitive closure of a DAG. We can associate to each vertex the set of vertices reachable from that vertex, stored as a `MergeableSet`. By traversing the graph in reverse topological order and using `union` to combine the sets of all of the vertices adjacent to each vertex, we can compute `MergeableSet`s of reachable vertices for all vertices in \(O(n^2 \log(n)^d)\) time. Then we just loop over all \(n\) vertices and obtain their lists of reachable vertices using `toList`, which also takes \(O(n^2 \log(n)^d)\) time. A Haskell implementation of this idea (adding a slight \(O(\log(n))\) overhead by using `Data.Map` so that I don’t have to get mutable arrays involved) looks like this:

```haskell
import qualified Data.Array as Array
import Data.Graph
import qualified Data.Map as Map

dagTransitiveClosure :: Graph -> Graph
dagTransitiveClosure g = buildG (Array.bounds g) transitiveClosureEdges
  where
    rs = reachableSets g
    transitiveClosureEdges =
      [(v1, v2) | v1 <- vertices g, v2 <- toList (rs Map.! v1), v1 /= v2]

type ReachableSets = Map.Map Vertex MergeableSet

reachableSets :: Graph -> ReachableSets
reachableSets g = foldl addVertex Map.empty $ topSort $ transposeG g
  where
    addVertex :: ReachableSets -> Vertex -> ReachableSets
    addVertex rs v = Map.insert v reachableSet rs
      where
        reachableSet = foldl union (singleton (Array.bounds g) v) $
                       map (rs Map.!) $ g Array.! v
```

If a `MergeableSet` data structure with the given time bounds exists (even without the `toList` operation), then **Cnf-Sat, the Boolean satisfiability problem for formulas in conjunctive normal form, has a \(2^{\delta n} \cdot \text{poly}(m)\) algorithm for some \(\delta < 1\)**.

Pătrașcu and Williams (2010) gave several hypotheses under which Cnf-Sat would have substantially faster algorithms than brute-force search. One of their theorems is as follows: if a certain problem 2Sat+2Clauses can be solved in time \(O((n + m)^{2 - \epsilon})\) for any \(\epsilon > 0\), then Cnf-Sat with \(n\) variables and \(m\) clauses can be solved in time \(2^{\delta n} \cdot \text{poly}(m)\) for some \(\delta < 1\). They note in passing that 2Sat+2Clauses reduces in linear time to the following problem:

Given a directed graph \(G = (V, E)\) and subsets \(S, T \subseteq V\), determine if there is some \(s \in S\) and \(t \in T\) with no path from \(s\) to \(t\).

By computing the strongly-connected components of \(G\), we can again without loss of generality assume that \(G\) is acyclic.

Now suppose that `MergeableSet` exists. Then it is possible to solve this problem in time \(O((n + m) \cdot \log(n)^c)\) for a graph with \(n\) vertices and \(m\) edges. First, we compute the set of vertices in \(T\) reachable from each vertex, using essentially the same algorithm as the one for transitive closure from before. Then we loop over each vertex in \(S\) and use `size` to test whether the size of its reachable set is less than \(|T|\). If we find a vertex \(s\) where this is the case, then return true; otherwise, return false. (We can also find a specific vertex \(t\) with no path from \(s\) to \(t\) by depth-first search from \(s\).)

So, to summarize, `MergeableSet` would dramatically improve upon the known upper bounds for graph reachability problems. It’s probably too good to be true.

**Source code and documentation for rulesgen are available on GitHub**.

```haskell
{-# LANGUAGE GADTs, RankNTypes #-}
module Data.Foldable.Mono ((*$*)) where

import Data.MonoTraversable (Element, MonoFoldable(..)) -- from the mono-traversable package

(*$*) :: MonoFoldable mono => (forall t. Foldable t => t (Element mono) -> a) -> mono -> a
f *$* o = f (Foldabilized o)

data Foldabilized a where
  Foldabilized :: MonoFoldable mono => mono -> Foldabilized (Element mono)

instance Foldable Foldabilized where
  foldr f z (Foldabilized o) = ofoldr f z o
  -- (Similar implementations for the other methods can be included
  -- here for efficiency.)
```

And then use it like this:

```haskell
import Data.Foldable.Mono
import qualified Data.Text.Lazy as T

testText = T.pack "foo quux bar"

example1 = maximum *$* testText     -- equals 'x'
example2 = mapM_ print *$* testText -- prints "'f'\n'o'\n'o'\n..."
```

Notice that those are the polymorphic `Foldable` functions `maximum` and `mapM_`, not `Text`-specific functions. I don’t know if this has any real-world applications, but it’s kind of neat…

**Update:** As pointed out by lfairy on Reddit, the FMList type works kind of like this.

Capsules are nice because they can form both spherical and elongated shapes in any direction. The animation above shows how Super Smash Bros. Melee uses spherical hitboxes that are “stretched” across frames into capsules to prevent fast-moving attacks from going through opponents without hitting them. (Marvel vs. Capcom 3 uses the same trick.) What really makes capsules useful is that they have a very simple mathematical description: a capsule is the set of all points less than a certain radius from a line segment. This means you can check whether two capsules intersect each other by just finding the shortest distance between the two line segments and checking whether it is less than the sum of the radii.

Calculating the distance between two line segments is a well-known problem. This StackOverflow answer gives the code to do that with floating-point arithmetic. Sometimes, though, approximating the correct answer with floating-point isn’t good enough. What if we want an exact intersection test for capsules using only integer arithmetic?

I’ll be giving code examples in Haskell. The code will be for 2-D capsules, but the 3-D case is not too different. Let’s start with some basic definitions using the vector-space package. Since we’re using integer arithmetic, all of our vectors and radii should have integer values only.

```haskell
{-# LANGUAGE TypeFamilies #-}
import Data.VectorSpace

-- Use arbitrary-size integers to avoid overflow in later calculations.
-- If you are using very small values only, this may not be necessary.
type GeomInt = Integer

data Vec = Vec { vecX, vecY :: !GeomInt } deriving (Show)

instance AdditiveGroup Vec where
  zeroV = Vec 0 0
  Vec x1 y1 ^+^ Vec x2 y2 = Vec (x1 + x2) (y1 + y2)
  negateV (Vec x y) = Vec (-x) (-y)

instance VectorSpace Vec where
  type Scalar Vec = GeomInt
  s *^ Vec x y = Vec (s * x) (s * y)

instance InnerSpace Vec where
  Vec x1 y1 <.> Vec x2 y2 = x1 * x2 + y1 * y2

-- Represents a *closed* 2-D line segment. Zero-length segments are allowed.
data Segment = Segment { segmentEnd1, segmentEnd2 :: !Vec } deriving (Show)

-- Represents an *open* 2-D stadium (disk-capped rectangle). It is required
-- that capsuleRadius > 0.
data Capsule = Capsule { capsuleSegment :: !Segment, capsuleRadius :: !GeomInt }
  deriving (Show)
```

The first step of the line segment distance computation is to test whether the line segments intersect. The test shown in the StackOverflow answer doesn’t work for our purposes because it uses floating-point division and because it treats parallel segments as never intersecting. Instead, we can use the exact test from this page.

```haskell
segmentsIntersect :: Segment -> Segment -> Bool
segmentsIntersect (Segment p1 q1) (Segment p2 q2) =
    (o1 /= o2 && o3 /= o4)
    || (o1 == Collinear && onSegment p1 p2 q1)
    || (o2 == Collinear && onSegment p1 q2 q1)
    || (o3 == Collinear && onSegment p2 p1 q2)
    || (o4 == Collinear && onSegment p2 q1 q2)
  where
    o1 = orientation p1 q1 p2
    o2 = orientation p1 q1 q2
    o3 = orientation p2 q2 p1
    o4 = orientation p2 q2 q1

data Orientation = Collinear | Clockwise | Counterclockwise deriving (Show, Eq)

orientation :: Vec -> Vec -> Vec -> Orientation
orientation (Vec px py) (Vec qx qy) (Vec rx ry) =
    case compare val 0 of
      LT -> Counterclockwise
      EQ -> Collinear
      GT -> Clockwise
  where
    val = (qy - py) * (rx - qx) - (qx - px) * (ry - qy)

-- onSegment p q r checks if q lies on the segment pr, assuming that
-- p, q, and r are collinear.
onSegment :: Vec -> Vec -> Vec -> Bool
onSegment (Vec px py) (Vec qx qy) (Vec rx ry) =
    qx <= max px rx && qx >= min px rx && qy <= max py ry && qy >= min py ry
```

(Note: the orientations `o3` and `o4` must be computed relative to the *second* segment, `p2 q2`; computing them relative to `p1 q1` is a bug that makes the general-position test always fail.)

Now here’s the tricky bit. If the segments do not intersect, we can’t simply find the distance between them to check against the radii, because the shortest distance may not be an integer. The standard trick of doing all comparisons on squared distance values to avoid square root operations doesn’t completely solve the problem either, because the closest point on one segment to the other may not even have integer coordinates.

If we were using imprecise floating-point arithmetic, the test would look like this:

```haskell
capsulesIntersect :: Capsule -> Capsule -> Bool
capsulesIntersect (Capsule s1@(Segment p1 q1) r1) (Capsule s2@(Segment p2 q2) r2) =
    segmentsIntersect s1 s2
    || check p1 s2 || check q1 s2 || check p2 s1 || check q2 s1
  where
    thresholdSq = (r1 + r2)^2

    check :: Vec -> Segment -> Bool
    check p (Segment e1 e2)
      | segLenSq == 0 || t <= 0 = magnitudeSq (p ^-^ e1) < thresholdSq
      | t >= 1                  = magnitudeSq (p ^-^ e2) < thresholdSq
      | otherwise               = magnitudeSq (p ^-^ near) < thresholdSq
      where
        d = e2 ^-^ e1
        segLenSq = magnitudeSq d
        near = e1 ^+^ t *^ d
        t = ((p ^-^ e1) <.> d) / segLenSq
```

Since we’re using integer arithmetic, though, the `(/)` operator is banned. The trick to pulling this off with integers only is to scale both sides of the third inequality by `segLenSq^2`. This will cancel the denominator so that we don’t have to do any division. We can use primed variables to denote “multiplied by a factor of `segLenSq`”. The exact capsule intersection is then:

```haskell
capsulesIntersect :: Capsule -> Capsule -> Bool
capsulesIntersect (Capsule s1@(Segment p1 q1) r1) (Capsule s2@(Segment p2 q2) r2) =
    segmentsIntersect s1 s2
    || check p1 s2 || check q1 s2 || check p2 s1 || check q2 s1
  where
    thresholdSq = (r1 + r2)^2

    check :: Vec -> Segment -> Bool
    check p (Segment e1 e2)
      | t' <= 0        = magnitudeSq (p ^-^ e1) < thresholdSq
      | t' >= segLenSq = magnitudeSq (p ^-^ e2) < thresholdSq
      | otherwise      = magnitudeSq (p' ^-^ near') < thresholdSq''
      where
        d = e2 ^-^ e1
        segLenSq = magnitudeSq d
        thresholdSq'' = segLenSq^2 * thresholdSq
        p' = segLenSq *^ p
        near' = segLenSq *^ e1 ^+^ t' *^ d
        t' = (p ^-^ e1) <.> d
```

Notice also that we don’t have to check `segLenSq == 0` anymore, because the `t' <= 0` case implicitly covers that.
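The core of the trick, point-versus-segment distance with the division scaled away, can also be isolated as a standalone sketch (plain `Integer` pairs here instead of the `Vec` type above, and `pointNearSegment` is a name made up for this illustration):

```haskell
type V = (Integer, Integer)

sub, add :: V -> V -> V
sub (x1, y1) (x2, y2) = (x1 - x2, y1 - y2)
add (x1, y1) (x2, y2) = (x1 + x2, y1 + y2)

dot :: V -> V -> Integer
dot (x1, y1) (x2, y2) = x1 * x2 + y1 * y2

magSq :: V -> Integer
magSq v = dot v v

scale :: Integer -> V -> V
scale s (x, y) = (s * x, s * y)

-- Is point p strictly within radius r of segment e1-e2, using only integers?
-- The third case compares squared distances scaled by segLenSq^2, which
-- cancels the division in the usual floating-point projection formula.
pointNearSegment :: V -> V -> V -> Integer -> Bool
pointNearSegment p e1 e2 r
  | t' <= 0        = magSq (p `sub` e1) < rSq
  | t' >= segLenSq = magSq (p `sub` e2) < rSq
  | otherwise      = magSq (p' `sub` near') < segLenSq ^ 2 * rSq
  where
    rSq      = r ^ 2
    d        = e2 `sub` e1
    segLenSq = magSq d
    t'       = (p `sub` e1) `dot` d
    p'       = scale segLenSq p
    near'    = scale segLenSq e1 `add` scale t' d
```

For example, the point \((5, 3)\) is at distance 3 from the segment from \((0, 0)\) to \((10, 0)\), so it is within radius 4 but not radius 2, and the computation never leaves the integers.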

Yay, math!

I took the sample shader code from that page and **translated it into a simple WebGL demo**. You need to have a browser that supports WebGL and the `WEBGL_depth_texture` extension. (Chrome should work, at least.) There are two sliders that let you control the subsurface scattering effect:

- One slider controls the simulated scattering radius by adjusting the distance between samples for the blur operation. If you turn this parameter up very high, you can get wave-like artifacts near sharp transitions in depth due to the way the depth buffer is factored into the blur operation. Increasing the number of Gaussian blur samples would reduce this effect at the cost of performance.
- The other slider controls how sharp a depth difference has to be before the shader will stop blurring across that area. If you turn this up to a large value, disconnected areas of the mesh will start blurring into each other, but if you turn it down too low the scattering effect will disappear completely.