From e59ca4a3eaa53d66fb2dcd3ddbdd86d99b04b7c8 Mon Sep 17 00:00:00 2001 From: Jed Barber Date: Mon, 28 Jun 2021 00:21:26 +1200 Subject: Converted everything to XHTML 1.1 --- project/templates/about.html | 25 +-- project/templates/adapad.html | 35 ++-- project/templates/base.html | 26 +-- project/templates/base_math.html | 35 ++++ project/templates/fltkada.html | 53 +++--- project/templates/grasp.html | 29 +-- project/templates/index.html | 6 +- project/templates/integral.html | 173 +++++++++--------- project/templates/links.html | 301 ++++++++++++++++---------------- project/templates/packrat.html | 132 +++++++------- project/templates/sokoban.html | 28 +-- project/templates/steelman.html | 72 ++++---- project/templates/stvcount.html | 141 ++++++++------- project/templates/sunset.html | 31 ++-- project/templates/tags.html | 10 +- project/templates/tags/application.html | 6 +- project/templates/tags/binding.html | 6 +- project/templates/tags/compsci.html | 6 +- project/templates/tags/copyright.html | 6 +- project/templates/tags/esoteric.html | 6 +- project/templates/tags/languages.html | 6 +- project/templates/tags/legal.html | 6 +- project/templates/tags/library.html | 6 +- project/templates/tags/mathematics.html | 6 +- project/templates/tags/politics.html | 6 +- project/templates/tags/programming.html | 6 +- project/templates/tags/videogames.html | 6 +- project/templates/thue2a.html | 36 ++-- 28 files changed, 635 insertions(+), 570 deletions(-) create mode 100644 project/templates/base_math.html (limited to 'project/templates') diff --git a/project/templates/about.html b/project/templates/about.html index 2648b2e..c2c744c 100644 --- a/project/templates/about.html +++ b/project/templates/about.html @@ -8,7 +8,7 @@ {% block style %} - + {% endblock %} @@ -17,17 +17,17 @@

About Me

-

Have you ever looked at the way things are done and just wanted to make it better? I feel -like that all the time. It's not even a matter of introducing something new. The field of computing is -littered with ideas that are well researched, have working prototypes, yet ended up abandoned decades -ago for no apparent reason. I expect other fields are probably similar. One of my goals in life is to -reduce this gap between what's known and what's actually used. Mainly in software development, because -that's what I'm most familiar with.

+

Have you ever looked at the way things are done and just wanted to make it better? I +feel like that all the time. It's not even a matter of introducing something new. The field of +computing is littered with ideas that are well researched, have working prototypes, yet ended up +abandoned decades ago for no apparent reason. I expect other fields are probably similar. One of my +goals in life is to reduce this gap between what's known and what's actually used. Mainly in +software development, because that's what I'm most familiar with.

-

So this website is primarily about hosting and talking about programming projects. Expect a bunch of -smaller less useful pieces before I get to the good stuff. But also expect electronics, civics, foreign -languages... basically anything catching my interest about which I can say something data driven or -otherwise interesting.

+

So this website is primarily about hosting and talking about programming projects. Expect a bunch +of smaller less useful pieces before I get to the good stuff. But also expect electronics, civics, +foreign languages... basically anything catching my interest about which I can say something data +driven or otherwise interesting.

Qualifications

@@ -39,7 +39,8 @@ otherwise interesting.

In 2012 I almost completed an extra honours year in computer science, before finding out that I'm -really not cut out for research. My mindset is too focused on solving problems. Practical problems.

+really not cut out for research. My mindset is too focused on solving problems. Practical problems. +

Contact

diff --git a/project/templates/adapad.html b/project/templates/adapad.html index 299db68..6e0501f 100644 --- a/project/templates/adapad.html +++ b/project/templates/adapad.html @@ -16,29 +16,30 @@
8/5/2017

The Ada binding for FLTK has now been moved to its own repository. -Installing it is required to build and use Adapad, naturally. Both repositories are set up to use the GNAT -Project Manager build tools to handle all that, with any further specific details in each project's readme.

+Installing it is required to build and use Adapad, naturally. Both repositories are set up to use +the GNAT Project Manager build tools to handle all that, with any further specific details in each +project's readme.

2/1/2017
-

I have a soft spot for the Ada programming language. +

I have a soft spot for the Ada programming language. Strong typing, built in concurrency, readable syntax, systems support, real-time support, a general -culture of correctness and emphasising reliability... what's not to like? I also have a bit of an interest -in FLTK, being one of the more prominent tiny -graphics toolkits around. Adapad is a notepad clone born as a side project from efforts to create an Ada -binding for FLTK.

+culture of correctness and emphasising reliability... what's not to like? I also have a bit of an +interest in FLTK, being one of the more +prominent tiny graphics toolkits around. Adapad is a notepad clone born as a side project from +efforts to create an Ada binding for FLTK.

-
+
A screenshot of Adapad -
Adapad in action
-
+ width="862" /> +
Adapad in action
+ -

It was modeled after Leafpad, and -the feature list is similar, currently comprising:

+

It was modeled after Leafpad, +and the feature list is similar, currently comprising:

diff --git a/project/templates/integral.html b/project/templates/integral.html index 0e5307b..1c9d175 100644 --- a/project/templates/integral.html +++ b/project/templates/integral.html @@ -1,5 +1,5 @@ -{% extends "base.html" %} +{% extends "base_math.html" %} @@ -8,7 +8,7 @@ {% block style %} - + {% endblock %} @@ -21,25 +21,26 @@
29/12/2018
-

A definite integral can be represented on the xy-plane as the signed area -bounded by the curve of the function f(x), the x-axis, and the limits of -integration a and b. But it's not immediately clear how this definition applies -for complex valued functions.

+

A definite integral can be represented on the xy-plane as the signed area bounded by the curve of +the function f(x), the x-axis, and the limits of integration a and b. But it's not immediately clear +how this definition applies for complex valued functions.

Consider the following example:

\[ \int_0^1 (-1)^x \, dx \]

-

If the function is graphed on the xy-plane, the real valued outputs are sparse. -Yet an elementary antiderivative exists and the definite integral is well defined.

+

If the function is graphed on the xy-plane, the real valued outputs are sparse. Yet an elementary +antiderivative exists and the definite integral is well defined.
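To make the sparseness concrete, here is a small numerical sketch in Python. It assumes the principal branch, (-1)^x = e^(iπx), which appears to be the convention behind the graphs that follow:

```python
import cmath
import math

def f(x):
    # Principal branch: (-1)^x = exp(x * ln(-1)) = exp(i * pi * x)
    return cmath.exp(1j * math.pi * x)

# The output is real only where sin(pi * x) vanishes, i.e. at integer x
print(f(0.0))  # exactly 1
print(f(0.5))  # approximately i
print(f(1.0))  # approximately -1
```

For every non-integer x the imaginary part is non-zero, which is why the xy-plane plot shows only isolated points.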

-
+
Real values only plot of minus one raised to the x power -
Figure 1 - Real values only
-
- -

In order to plot a meaningful graph that can be used to potentially calculate -the integral as a signed area, some cues are taken from Philip Lloyd's work on -Phantom Graphs. -In that work, an additional z-axis is used to extend the x-axis into a complex -xz-plane, allowing complex inputs to be graphed. For the function considered -here, the z-axis is instead used to extend the y-axis into a complex yz-plane -to allow graphing of complex outputs instead.

+ width="520" /> +
Figure 1 - Real values only
+ + +

In order to plot a meaningful graph that can be used to potentially calculate the integral as a +signed area, some cues are taken from Philip Lloyd's work on +Phantom Graphs. In that work, an +additional z-axis is used to extend the x-axis into a complex xz-plane, allowing complex inputs to +be graphed. For the function considered here, the z-axis is instead used to extend the y-axis into a +complex yz-plane to allow graphing of complex outputs instead.

Upon doing so, the following helical graph is obtained:

-
+
Complete three dimensional graph of all values of minus one raised to the x power
Figure 2 - Full graph
-
+ width="520" /> +
Figure 2 - Full graph
+ -

The curve is continuous and spirals around the x-axis, intersecting with the -real xy-plane at the points plotted in the initial graph of Figure 1. However -it is still not clear how to represent the area under the curve.

+

The curve is continuous and spirals around the x-axis, intersecting with the real xy-plane at the +points plotted in the initial graph of Figure 1. However it is still not clear how to represent the +area under the curve.

-

Observing that complex numbers in cartesian form are composed of a real -part and an imaginary part, it is possible to decompose the function -into real and imaginary components. These are easy to obtain by rotating the -graph above to view the real and imaginary parts as flat planes.

+

Observing that complex numbers in cartesian form are composed of a real part and an imaginary +part, it is possible to decompose the function into real and imaginary components. These are easy to +obtain by rotating the graph above to view the real and imaginary parts as flat planes.
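Stated symbolically (for the principal branch), the decomposition used here is:

```latex
(-1)^x = e^{x \ln(-1)} = e^{i \pi x} = \cos(\pi x) + i \sin(\pi x)
```

The real component is the cosine term and the imaginary component is the sine term, matching the two rotated views below.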

-
+
Graph of the real component of minus one raised to the power of x -
Figure 3 - Real component
-
+ width="520" /> +
Figure 3 - Real component
+
-
+
Graph of the imaginary component of minus one raised to the power of x -
Figure 4 - Imaginary component
-
+ width="520" /> +
Figure 4 - Imaginary component
+
-

From this it can be seen that the function is a combination of a real valued -cosine and an imaginary valued sine. With the limits of integration under -consideration, the real values disappear and we are left with the following:

+

From this it can be seen that the function is a combination of a real valued cosine and an +imaginary valued sine. With the limits of integration under consideration, the real values disappear +and we are left with the following:

@@ -118,15 +117,17 @@ consideration, the real values disappear and we are left with the following:

@@ -217,9 +218,9 @@ consideration, the real values disappear and we are left with the following:

\[
\int_0^1 (-1)^x \, dx
= i \int_0^1 \sin(\pi x) \, dx
= \left[ -\frac{i}{\pi} \cos(\pi x) \right]_0^1
= -\frac{i}{\pi} \left( -1 - 1 \right)
= \frac{2i}{\pi}
\]
-

This agrees with the answer obtained by ordinary evaluation of the integral -without considering the graph, so the informal area under the curve definition -still works. Considering the area under the curve using polar coordinates also -works, but requires evaluating a less than pleasant infinite sum and so won't -be considered here.

+

This agrees with the answer obtained by ordinary evaluation of the integral without considering +the graph, so the informal area under the curve definition still works. Considering the area under +the curve using polar coordinates also works, but requires evaluating a less than pleasant infinite +sum and so won't be considered here.
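As an informal cross-check, the same value can be approximated numerically. This Python sketch integrates the principal branch e^(iπx) over [0, 1] with the midpoint rule (illustrative only):

```python
import cmath
import math

# Midpoint-rule estimate of the integral of (-1)^x = exp(i * pi * x) over [0, 1]
n = 100_000
estimate = sum(cmath.exp(1j * math.pi * (k + 0.5) / n) for k in range(n)) / n

exact = 2j / math.pi
print(estimate)            # approximately 0.6366j
print(abs(estimate - exact))  # small discretisation error
```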

The next interesting question is how this relates to the surface area of a -right helicoid.

+ +right helicoid.

{% endblock %} diff --git a/project/templates/links.html b/project/templates/links.html index 54a0678..c7640f5 100644 --- a/project/templates/links.html +++ b/project/templates/links.html @@ -8,7 +8,7 @@ {% block style %} - + {% endblock %} @@ -17,193 +17,194 @@

Links

-

These are some of the websites that found their way into my bookmarks over the years. Quite a few years, -since some of these require the Wayback Machine to view now. Posted because if I find these pages interesting, -chances are someone else might too. Note that this list is not anywhere near exhaustive.
-
-Please do not send me suggestions for websites to put here. While I am sure you mean well, I much prefer to -discover these things for myself.

+

These are some of the websites that found their way into my bookmarks over the years. Quite a +few years, since some of these require the Wayback Machine to view now. Posted because if I find +these pages interesting, chances are someone else might too. Note that this list is not anywhere +near exhaustive.
+
+Please do not send me suggestions for websites to put here. While I am sure you mean well, I much +prefer to discover these things for myself.

Blogs
Books
Computer Hardware
DIY
Electronics
Fitness and Cycling
Food
Foreign Languages
Gaming
General Computing
@@ -212,175 +213,175 @@ discover these things for myself.

Math and Logic
Operating Systems
Other
Politics and Law
Programming
Science
Travel and Transportation
diff --git a/project/templates/packrat.html b/project/templates/packrat.html index 117c69a..17dbfc0 100644 --- a/project/templates/packrat.html +++ b/project/templates/packrat.html @@ -12,7 +12,7 @@

Packrat Parser Combinator Library

-

Git repository: Link
+

Git repository: Link
Paper this was based on: Link

2/2/2021
@@ -20,17 +20,16 @@ Paper this was based on: recursive descent parsing. They are higher order -functions that can be combined in modular ways to create a desired parser.

+

Parser combinators are what you end up with when you start factoring out common pieces of +functionality from recursive descent parsing. They are higher order functions that can be combined in modular ways to create a desired parser.
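The library itself is Ada, but the core idea is easy to sketch in Python: each combinator takes parsers and returns a new parser. Below, a parser maps (input, position) to a list of (value, next position) results, with the empty list meaning failure. This is an illustrative analogue only, not this library's API:

```python
def char(c):
    # Parser that accepts the single symbol c
    def parse(s, i):
        return [(c, i + 1)] if i < len(s) and s[i] == c else []
    return parse

def seq(p, q):
    # Run p, then run q on the remaining input; pair up the values
    def parse(s, i):
        return [((v1, v2), k) for v1, j in p(s, i) for v2, k in q(s, j)]
    return parse

def alt(p, q):
    # Try both alternatives, keeping every successful result
    def parse(s, i):
        return p(s, i) + q(s, i)
    return parse

# 'a' followed by either 'b' or 'c'
ab_or_ac = seq(char("a"), alt(char("b"), char("c")))
print(ab_or_ac("ac", 0))  # [(('a', 'c'), 2)]
```

Returning a list of results rather than a single result is what lets this style report every parse of an ambiguous grammar.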

-

However they also inherit the drawbacks of recursive descent parsing, and in particular recursive descent -parsing with backtracking. If the grammar that the parser is designed to accept contains left recursion then -the parser will loop infinitely. If the grammar is ambiguous then only one result will be obtained. And any result -may require exponential time and space to calculate.

+

However they also inherit the drawbacks of recursive descent parsing, and in particular recursive +descent parsing with backtracking. If the grammar that the parser is designed to accept contains +left recursion then the parser will loop infinitely. If the grammar is ambiguous then only one +result will be obtained. And any result may require exponential time and space to calculate.

-

This library, based on the paper linked at the top, solves all those problems and a few bits more. As an -example, the following grammar portion:

+

This library, based on the paper linked at the top, solves all those problems and a few bits +more. As an example, the following grammar portion:

@@ -65,74 +64,81 @@ function Expression is new Stamp (Expression_Label, Expr_Choice);
 
 
-

Most of the verbosity is caused by the need to individually instantiate each combinator, as generics are -used to serve the same purpose as higher order functions. Some bits are also omitted, such as the label -enumeration and the actual setting of the redirectors. But the above should provide a good general -impression.

+

Most of the verbosity is caused by the need to individually instantiate each combinator, as +generics are used to serve the same purpose as higher order functions. Some bits are also omitted, +such as the label enumeration and the actual setting of the redirectors. But the above should +provide a good general impression.

Features

A list of features that this library provides includes:

    -
  • Higher order combinator functions in Ada, a language that does not support functional programming
  • +
  • Higher order combinator functions in Ada, a language that does not support functional + programming
  • Both parser combinators and simpler lexer combinators are available
  • Input can be any array, whether that is text strings or otherwise
  • Left recursive grammars are parsed correctly with no infinite loops
  • -
  • Ambiguity is handled by incorporating all possible valid options into the resulting parse tree
  • -
  • Parsing and lexing can both be done piecewise, providing input in successive parts instead of all at once
  • -
  • Error messages are generated when applicable that note what would have been needed and where for a successful parse
  • +
  • Ambiguity is handled by incorporating all possible valid options into the resulting parse tree +
  • +
  • Parsing and lexing can both be done piecewise, providing input in successive parts instead of + all at once
  • +
  • Error messages are generated when applicable that note what would have been needed and where + for a successful parse
  • All of the above is accomplished in polynomial worst case time and space complexity

More thorough documentation is provided in the /doc directory.

-

The name of the library comes from packrat parsing -which is a parsing algorithm that avoids exponential time complexity by memoizing all intermediate results. As that -is what this library does, both so as to reduce the time complexity and to enable piecewise parsing, the name -seemed appropriate.

+

The name of the library comes from packrat parsing, which is a parsing algorithm that avoids exponential time complexity by memoizing all intermediate results. As that is what this library does, both to reduce the time complexity and to enable piecewise parsing, the name seemed appropriate.

Left Recursion
-

Direct left recursion, meaning a grammar non-terminal that immediately recurses to itself on the left as in -the Expression or Term used above, is fairly easy to handle. A counter is used to keep track -of how many times a particular combinator has been applied to the input at a particular position, and when that -counter exceeds the number of unconsumed input tokens plus one the result is curtailed. This is explained on -pages 7 and 8 of the paper.

- -

The really thorny issue that caused the most problems with this library is indirect left recursion. This is -when a non-terminal recurses to itself on the left only after first evaluating to one or more other non-terminals. -Curtailment in these circumstances can easily cause those other non-terminals to also be curtailed, and reusing -the results for those other non-terminals may be incorrect. This issue along with a proposed solution is -explained on page 9 of the paper. However that solution was not as clear as would be preferred. So some minor -rephrasing and reworking was needed.

- -

Bringing this problem back to the start: What are we really doing when we curtail a result due to left -recursion? It is not a matter of cutting off branches in a parse tree. We are identifying conditions where the -parse result of a particular non-terminal can be calculated without further recursion. The word "curtailment" is -somewhat misleading in this regard. Once this reframing is done then a clearer view immediately follows.

- -

What is the condition? Exactly as described above for direct left recursion. Through comparing recursion counts -with the amount of unconsumed input we determine that a result of no successful parse can be calculated, and that -the result is valid for reuse for any deeper recursion of the same combinator at that input position.

+

Direct left recursion, meaning a grammar non-terminal that immediately recurses to itself on the +left as in the Expression or Term used above, is fairly easy to handle. A counter +is used to keep track of how many times a particular combinator has been applied to the input at a +particular position, and when that counter exceeds the number of unconsumed input tokens plus one +the result is curtailed. This is explained on pages 7 and 8 of the paper.
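The counter test described above can be stated compactly. This hypothetical Python fragment mirrors the rule (applications of one combinator at one position may not exceed the unconsumed input plus one); the names and representation are invented for illustration:

```python
def should_curtail(counts, combinator, position, input_length):
    # counts maps (combinator, position) -> number of applications so far.
    # Curtail once the count exceeds the unconsumed input tokens plus one,
    # since no successful parse could recurse deeper than that.
    unconsumed = input_length - position
    return counts.get((combinator, position), 0) > unconsumed + 1
```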

+ +

The really thorny issue that caused the most problems with this library is indirect left +recursion. This is when a non-terminal recurses to itself on the left only after first evaluating to +one or more other non-terminals. Curtailment in these circumstances can easily cause those other +non-terminals to also be curtailed, and reusing the results for those other non-terminals may be +incorrect. This issue along with a proposed solution is explained on page 9 of the paper. However +that solution was not as clear as would be preferred. So some minor rephrasing and reworking was +needed.

+ +

Bringing this problem back to the start: What are we really doing when we curtail a result due to +left recursion? It is not a matter of cutting off branches in a parse tree. We are identifying +conditions where the parse result of a particular non-terminal can be calculated without further +recursion. The word "curtailment" is somewhat misleading in this regard. Once this reframing is done +then a clearer view immediately follows.

+ +

What is the condition? Exactly as described above for direct left recursion. Through comparing +recursion counts with the amount of unconsumed input we determine that a result of no successful +parse can be calculated, and that the result is valid for reuse for any deeper recursion of the same combinator at that input position.

From that can be derived:

    -
  • When merging two results that have different left recursion count conditions for the same non-terminal, - the larger count should be used
  • -
  • Conditions of subresults should also be made part of any result that includes those subresults
  • -
  • Any memoized result is only reusable if all the recursion count conditions of the stored result are less - than or equal to the recursion counts for the current input position
  • +
  • When merging two results that have different left recursion count conditions for the same + non-terminal, the larger count should be used
  • +
  • Conditions of subresults should also be made part of any result that includes those subresults +
  • +
  • Any memoized result is only reusable if all the recursion count conditions of the stored + result are less than or equal to the recursion counts for the current input position
-

So far the above list just covers what is in the paper. But there is a little more that can be inferred:

+

So far the above list just covers what is in the paper. But there is a little more that can be +inferred:

    -
  • If a result is not reusable and a new result is calculated, then the recursion count conditions of the - old result should be updated to the recursion counts at the current position and applied to the new result
  • -
  • When the recursion count of a condition applied to a result plus the number of unconsumed input tokens after - the result is less than the number of input tokens available at the beginning of the result, then that - condition can be omitted from the result
  • +
  • If a result is not reusable and a new result is calculated, then the recursion count + conditions of the old result should be updated to the recursion counts at the current position and + applied to the new result
  • +
  • When the recursion count of a condition applied to a result plus the number of unconsumed + input tokens after the result is less than the number of input tokens available at the beginning + of the result, then that condition can be omitted from the result

These two details should constitute a minor efficiency improvement.

@@ -140,17 +146,19 @@ the result is valid for reuse for any deeper recursion of the same combinator at
Further Work
-

While the polynomial complexity of this library has been experimentally confirmed, no effort has yet been -made to prove that it is actually polynomial in the same way that the parser combinators in the paper are. It -is possible that due to the changes involved with using a non-functional language and enabling piecewise -parsing that some subtle complexity difference may have arisen.

+

While the polynomial complexity of this library has been experimentally confirmed, no effort has yet been made to prove that it is actually polynomial in the same way that the parser combinators in the paper are. It is possible that, due to the changes involved with using a non-functional language and enabling piecewise parsing, some subtle complexity difference may have arisen.

-

Likewise, the piecewise parsing has been unit tested to a degree but no formal proof that it is correct has -been done.

+

Likewise, the piecewise parsing has been unit tested to a degree but no formal proof that it is +correct has been done.

-

Ideas like being able to modify and resume an erroneous parsing attempt would also be interesting to explore.

+

Ideas like being able to modify and resume an erroneous parsing attempt would also be interesting +to explore.

-

Finally, the plan is to actually use this library for something significant at some point in the future.

+

Finally, the plan is to actually use this library for something significant at some point in the +future.

{% endblock %} diff --git a/project/templates/sokoban.html b/project/templates/sokoban.html index ee2a470..bf676d1 100644 --- a/project/templates/sokoban.html +++ b/project/templates/sokoban.html @@ -16,24 +16,24 @@
8/8/2017

Back when I was studying computer science at university, there was an assignment involving -Sokoban. We were tasked with completing -a half written implementation in Java and Swing. This is not that implementation. It is, however, based -on it. Recently while going over old notes I found the assignment. The submission I had originally -made is lost to time, but it seemed like a nice quick diversion to get some more use out of the -FLTK Ada binding.

+Sokoban. We were tasked with +completing a half written implementation in Java and Swing. This is not that implementation. It is, +however, based on it. Recently while going over old notes I found the assignment. The submission I +had originally made is lost to time, but it seemed like a nice quick diversion to get some more use +out of the FLTK Ada binding.

-
+
The first level -
The first level
-
- -

This is a vanilla version, so the only mechanic is pushing blocks to specific goal tiles. Controls -are simple enough that instructions can be left permanently written at the bottom of the window. An -A* Search algorithm is used -for the mouse control.

+ width="764" /> +
The first level
+
+ +

This is a vanilla version, so the only mechanic is pushing blocks to specific goal tiles. +Controls are simple enough that instructions can be left permanently written at the bottom of the +window. An A* Search +algorithm is used for the mouse control.
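For reference, the pathfinding idea can be sketched as a generic A* on a four-connected grid with a Manhattan-distance heuristic. This is Python rather than the Ada the project uses, and all names are invented for illustration:

```python
import heapq

def astar(walls, start, goal):
    # walls: set of blocked (x, y) cells; returns a shortest path or None
    def h(p):
        # Manhattan distance: admissible for 4-neighbour grid movement
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    frontier = [(h(start), 0, start, [start])]  # (f, g, position, path)
    seen = set()
    while frontier:
        _, g, pos, path = heapq.heappop(frontier)
        if pos == goal:
            return path
        if pos in seen:
            continue
        seen.add(pos)
        x, y = pos
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if nxt not in walls and nxt not in seen:
                heapq.heappush(frontier, (g + 1 + h(nxt), g + 1, nxt, path + [nxt]))
    return None

# Route around a small wall
print(astar({(1, 0), (1, 1)}, (0, 0), (2, 0)))
```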

{% endblock %} diff --git a/project/templates/steelman.html b/project/templates/steelman.html index bb61548..6fb543b 100644 --- a/project/templates/steelman.html +++ b/project/templates/steelman.html @@ -8,7 +8,7 @@ {% block style %} - + {% endblock %} @@ -28,7 +28,7 @@ more refined versions of the requirements from Strawman through to Ironman, this programming language, possibly the gold standard language for writing safe and secure software, was designed to comply with Steelman.

-

In 1996 David A. Wheeler wrote a paper +

In 1996 David A. Wheeler wrote a paper that compared Ada, C, C++, and Java against the Steelman. This served to highlight the strengths and weaknesses of those languages, areas that could be improved, and a scant few requirement points that perhaps aren't even applicable anymore. Since then several more programming languages capable of systems work have been created, so it's time for an @@ -37,70 +37,70 @@ update. More datapoints! Hence, this article will conduct a similar comparison,

The Languages
-

D was created originally as a reworking of C++ in 2000-2001. It +

D was created originally as a reworking of C++ in 2000-2001. It serves to represent a progression of the C language family, adding features including contracts, optional garbage collection, and a standard threading model.

-

Parasail is a research language +

Parasail is a research language created in 2009 by AdaCore, the main vendor of Ada compiler tooling today. The language is designed with implicit parallelism throughout, simplifying and adding static checking to eliminate as many sources of errors as possible. It represents a possible future direction for Ada derived languages.

-

Pascal, like C, predates +

Pascal, like C, predates the Steelman requirements and so they cannot have had any influence at all on the language. It was designed for formal specification and teaching algorithms. Later dialects were used to develop several high profile software projects, including Skype, Photoshop, and the original Mac OS. It is useful to consider as a precursor of Ada, sharing many points of functionality and style.

-

Rust is the newest language here, created in 2010. It +

Rust is the newest language here, created in 2010. It is an odd mix of C and ML influence, placing more emphasis on the functional paradigm than other systems languages. Its main claim to fame is adding another method of heap memory safety via -affine typing.

+affine typing.

-
+
Logo for the D programming language -
D
-
+ width="164" /> +
D
+
-
+
Logo for the Parasail programming language -
Parasail
-
+ width="149" /> +
Parasail
+
-
+
A picture of Blaise Pascal to stand in as a logo for the Pascal programming language -
Pascal*
-
+ width="142" /> +
Pascal*
+
-
+
Logo for the Rust programming language -
Rust
-
+ width="144" /> +
Rust
+
-

* Pascal does not have an official logo, so a picture of Blaise Pascal, +

* Pascal does not have an official logo, so a picture of Blaise Pascal, in whose honour the language is named, will have to do.

@@ -118,17 +118,17 @@ that, effort has been made to keep the evaluation as similar as practical to the

The defining documents used for each of these languages are as follows:

@@ -224,8 +224,8 @@ considered meet that requirement on the right. Some explanatory notes are includ turned out to be educated guesses. Fairness was the goal, but nonetheless reader discretion is advised.

-
- +
+
@@ -1502,7 +1502,7 @@ turned out to be educated guesses. Fairness was the goal, but nonetheless reader -   +   @@ -2320,7 +2320,7 @@ turned out to be educated guesses. Fairness was the goal, but nonetheless reader
-
+
{% endblock %} diff --git a/project/templates/stvcount.html b/project/templates/stvcount.html index 02b47ec..b19b4d7 100644 --- a/project/templates/stvcount.html +++ b/project/templates/stvcount.html @@ -8,7 +8,7 @@ {% block style %} - + {% endblock %} @@ -22,26 +22,27 @@
19/2/2017

To give an incredibly brief summary of Australia's political system, both the Federal Parliament and most of the State +class="external">Australia's political system, both the Federal Parliament and most of the State Parliaments are bicameral. The lower houses are generally elected by Instant Runoff, while the upper -houses generally have half elections using Single Transferable Vote. There are exceptions and a whole -lot of differing details, but that's the overall pattern.

+houses generally have half elections using Single Transferable Vote. There are exceptions and a +whole lot of differing details, but that's the overall pattern.

In 2016, however, the Federal Parliament underwent a Double Dissolution, causing the entirety of both houses to go to an election. This had the outcome of 20 out of 76 seats going to third parties -in the upper house, a record number. Even more than the 18 there were prior. As the entire purpose of -a Double Dissolution is to break deadlocks in parliament, to have the outcome go in the +in the upper house, a record number. Even more than the 18 there were prior. As the entire purpose +of a Double Dissolution is to break deadlocks in parliament, to have the outcome go in the complete opposite direction probably caused some dismay from Malcolm Turnbull +class="external">complete opposite direction probably caused some dismay from Malcolm Turnbull and his Liberal/National government.

-

This raises the question: Would they have been better off had a normal election happened instead?

+

This raises the question: Would they have been better off had a normal election happened instead? +

To calculate the likely outcome, the ballot preference data is needed. That's the easy part, as the Australian Electoral Commission makes that available -here -in the 'Formal preferences' section. Then, a program is needed to execute the STV algorithm, which is -as follows:

+here +in the 'Formal preferences' section. Then, a program is needed to execute the STV algorithm, which +is as follows:

  1. Set the quota of votes required for a candidate to win.
  2. @@ -55,48 +56,51 @@ as follows:

  3. Repeat steps 3-5 until all seats are filled.
-

Seems simple enough, right? Except not really. There is a surprising amount of complexity in there, and most -of it is to do with how to transfer votes around. So, in addition, there are the specifics for the version -used for the Australian Senate:

+

Seems simple enough, right? Except not really. There is a surprising amount of complexity in +there, and most of it is to do with how to transfer votes around. So, in addition, there are the +specifics for the version used for the Australian Senate:

My implementation also includes bulk exclusions using applied breakpoints in order to increase speed slightly and minimise -superfluous logging.

+class="external">bulk exclusions using applied breakpoints in order to increase speed slightly +and minimise superfluous logging.

-

At this point I'm fairly sure my program provides an accurate count. However, my numbers still differ -slightly from the ones provided by the AEC's official distribution of preferences. Investigations into the -exact cause are ongoing.

+

At this point I'm fairly sure my program provides an accurate count. However, my numbers still +differ slightly from the ones provided by the AEC's official distribution of preferences. +Investigations into the exact cause are ongoing.
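For readers who want to experiment, the counting loop described in the steps above can be sketched in a few dozen lines. This is a simplified illustration only, not the program used for these results: it applies the integer Droop quota and a single fractional transfer value per surplus, and it omits the Senate-specific inclusive Gregory rules, bulk exclusions, and tie-breaking. All function and variable names here are invented for the sketch.

```python
from collections import defaultdict

def droop_quota(valid_votes, seats):
    # Integer Droop quota: the smallest whole number of votes that only
    # `seats` candidates can each reach.
    return valid_votes // (seats + 1) + 1

def stv_count(ballots, seats):
    """Count a simplified STV election.

    ballots: list of (count, [preferences]) pairs, where `count` is how
    many identical ballot papers carry that preference ordering.
    Returns the elected candidates in order of election.
    """
    papers = [[float(count), prefs] for count, prefs in ballots]
    quota = droop_quota(int(sum(count for count, _ in papers)), seats)
    hopeful = {c for _, prefs in papers for c in prefs}
    elected = []
    while len(elected) < seats and hopeful:
        # Tally each paper toward its highest-ranked hopeful candidate.
        tally = defaultdict(float)
        for weight, prefs in papers:
            top = next((c for c in prefs if c in hopeful), None)
            if top is not None:
                tally[top] += weight
        # If the remaining candidates exactly cover the remaining seats,
        # elect them all and stop.
        if len(hopeful) <= seats - len(elected):
            elected.extend(sorted(hopeful, key=lambda c: -tally[c]))
            break
        leader = max(hopeful, key=lambda c: tally[c])
        if tally[leader] >= quota:
            # Elect, then pass the surplus on at a fractional value.
            elected.append(leader)
            transfer_value = (tally[leader] - quota) / tally[leader]
            for paper in papers:
                top = next((c for c in paper[1] if c in hopeful), None)
                if top == leader:
                    paper[0] *= transfer_value
            hopeful.remove(leader)
        else:
            # Nobody reached quota: exclude the lowest candidate, whose
            # papers flow to their next preferences at full value.
            lowest = min(hopeful, key=lambda c: tally[c])
            hopeful.remove(lowest)
    return elected
```

Even this toy version shows where the complexity lives: almost all of the subtlety is in how surpluses and exclusions move votes between candidates, which is exactly where the real Senate rules pile on special cases.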

Results

-

Calculations were done for each state using the formal preference data with vacancies set to 6 instead of 12, -and the results were added to the Senators elected in 2013 to find the probable outcome. The results for -ACT and NT were taken as-is, because the few Senators elected from the territories are not part of the half -election cadence anyway.

+

Calculations were done for each state using the formal preference data with vacancies set to 6 +instead of 12, and the results were added to the Senators elected in 2013 to find the probable +outcome. The results for ACT and NT were taken as-is, because the few Senators elected from the +territories are not part of the half election cadence anyway.

-

Computational resources required varied from approximately 50 seconds using 46MB of memory for Tasmania, to -nearly 30 minutes using 1452MB memory for NSW. The vast majority of that time was spent parsing preference data, -and the program is single threaded, so there is still room for improvement. All counts were run on a Core 2 Quad -Q9500.

+

Computational resources required varied from approximately 50 seconds using 46MB of memory for
+Tasmania, to nearly 30 minutes using 1452MB of memory for NSW. The vast majority of that time was spent
+parsing preference data, and the program is single-threaded, so there is still room for improvement.
+All counts were run on a Core 2 Quad Q9500.
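Since parsing dominates the runtime, one cheap optimisation is to aggregate identical preference orderings while reading the file, so the count loop later touches each distinct ordering once rather than once per ballot paper. The sketch below assumes a simplified layout (a final CSV field of comma-separated rank numbers, one per candidate in ballot-paper order, blanks for unranked candidates); the layout of the real AEC files should be checked before relying on it, and the names here are invented for illustration.

```python
import csv

def load_ballots(path, candidates):
    """Read ballot papers from a formal-preferences CSV.

    Assumes each row's last field holds comma-separated rank numbers in
    ballot-paper order -- a simplification of the AEC layout, labeled
    here as an assumption rather than the real schema.
    """
    tallies = {}
    with open(path, newline="") as f:
        reader = csv.reader(f)
        next(reader)  # skip the header row
        for row in reader:
            ranks = row[-1].split(",")
            # Keep only fields that actually contain a rank number.
            ranked = [(int(r), c) for r, c in zip(ranks, candidates)
                      if r.strip().isdigit()]
            prefs = tuple(c for _, c in sorted(ranked))
            if prefs:
                # Aggregate identical orderings into one weighted entry.
                tallies[prefs] = tallies.get(prefs, 0) + 1
    return [(count, list(prefs)) for prefs, count in tallies.items()]
```

Aggregation helps because millions of physical papers collapse into far fewer distinct orderings, shrinking both the memory footprint and the work done per counting round.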

@@ -265,30 +269,35 @@ Q9500.

Probable non-DD results by state
-

* These three parties were all part of the Palmer United Party at the 2013/2014 election, but split up mid term.

- -

Surprisingly, these projected results still have 20 out of 76 seats held by third party candidates, despite -the half election putting them at a disadvantage. The number of third party groups the Liberal/Nationals have to -negotiate with to pass legislation (assuming Labor and Greens attempt to block) equally remains unchanged.

- -

The Greens manage to do slightly worse, even though their usual position of winning the 5th or 6th seat in most states -often allows them to obtain more representation than their primary vote would otherwise support. This can't even be -attributed to a bad 2013 result, as their primary vote both then and in 2016 was nearly identical.

- -

One Nation's much reduced number of seats can be attributed to the inherent geographic bias that any system involving -electing candidates across many independent divisions has. If like-minded voters are all in one place, they -receive representation, but when the same number of voters are spread out, they get nothing. When this effect -is intentionally exploited it's called gerrymandering, but here it's merely an artifact of electing Senators from each -state separately. One Nation's support is strongest in Queensland but is relatively diffuse. Any claims of Pauline -Hanson being one of the most powerful politicians in Australia are thus overblown.

- -

The Xenophon Group, by contrast, has the vast majority of their support concentrated in South Australia. So the result -for them remains unchanged.

- -

The most noteworthy outcomes for the question though, are that the Liberal/Nationals would have obtained more seats, -and Labor would have been in a more difficult position to block the passage of legislation. Meaning that yes, the -Liberal/National government would definitely have been better off with a normal election.

+

* These three parties were all part of the Palmer United Party at the 2013/2014 election, but
+split up mid-term.

+ +

Surprisingly, these projected results still have 20 out of 76 seats held by third party
+candidates, despite the half election putting them at a disadvantage. The number of third party
+groups the Liberal/Nationals have to negotiate with to pass legislation (assuming Labor and Greens
+attempt to block) likewise remains unchanged.

+ +

The Greens manage to do slightly worse, even though their usual position of winning the 5th or +6th seat in most states often allows them to obtain more representation than their primary vote +would otherwise support. This can't even be attributed to a bad 2013 result, as their primary vote +both then and in 2016 was nearly identical.

+ +

One Nation's much reduced number of seats can be attributed to the inherent geographic bias that
+any system that elects candidates across many independent divisions has. If like-minded
+voters are all in one place, they receive representation, but when the same number of voters are
+spread out, they get nothing. When this effect is intentionally exploited it's called
+gerrymandering, but here it's merely an artifact of electing Senators from each state separately.
+One Nation's support is strongest in Queensland but is relatively diffuse. Any claims of Pauline
+Hanson being one of the most powerful politicians in Australia are thus
+overblown.

+ +

The Xenophon Group, by contrast, has the vast majority of their support concentrated in South +Australia. So the result for them remains unchanged.

+ +

The most noteworthy outcomes for the question, though, are that the Liberal/Nationals would have
+obtained more seats, and Labor would have been in a more difficult position to block the passage of
+legislation. Meaning that yes, the Liberal/National government would definitely have been better off
+with a normal election.

Nice job screwing over your own party, Malcolm.

diff --git a/project/templates/sunset.html b/project/templates/sunset.html
index d670e64..9354e63 100644
--- a/project/templates/sunset.html
+++ b/project/templates/sunset.html
@@ -16,19 +16,18 @@
29/6/2017

Software licenses bother me. As a general rule I prefer to make my projects open source, -and for that purpose something like the Unlicense -is often sufficient. But if I don't want to put my work in the public domain immediately, then -I have to make use of a copyleft license. And all -of the ones currently available are both incredibly, unnecessarily verbose, and fail to -address the primary failing of modern copyright law, which is the unreasonably long term -lengths.

- -

So after a considerable amount of thought, I've written my own. (I can hear those with -legal knowledge wailing and gnashing their teeth already.) Care has been taken to mimic the -phrasing used in popular existing licenses where possible. I've also kept it as simple and as -straightforward as I can make it. So hopefully there are no loopholes and it's exactly as it -appears: a simple weak copyleft license which places older parts of a work under a public -domain disclaimer in a reasonable timeframe.

and for that purpose something like the Unlicense
+is often sufficient. But if I don't want to put my work in the public domain immediately, then I
+have to make use of a copyleft license. And
+all of the ones currently available are incredibly, unnecessarily verbose, and fail to address
+the primary failing of modern copyright law, which is the unreasonably long term lengths.

+ +

So after a considerable amount of thought, I've written my own. (I can hear those with legal +knowledge wailing and gnashing their teeth already.) Care has been taken to mimic the phrasing used +in popular existing licenses where possible. I've also kept it as simple and as straightforward as I +can make it. So hopefully there are no loopholes and it's exactly as it appears: a simple weak +copyleft license which places older parts of a work under a public domain disclaimer in a reasonable +timeframe.

The full text is as follows:

@@ -56,9 +55,9 @@ may do whatever you want with it, regardless of all other clauses.
-

The git repository also contains an accompanying rationale and a simple logo I threw together. -In the future, all my projects will either use this license or the Unlicense. Works I've -already created will be relicensed as appropriate.

+

The git repository also contains an accompanying rationale and a simple logo I threw together. In +the future, all my projects will either use this license or the Unlicense. Works I've already +created will be relicensed as appropriate.

{% endblock %}
diff --git a/project/templates/tags.html b/project/templates/tags.html
index a3edfcc..349f575 100644
--- a/project/templates/tags.html
+++ b/project/templates/tags.html
@@ -8,7 +8,7 @@
{% block style %}
-
+
{% endblock %}
@@ -19,11 +19,13 @@