Jonkman Microblog

Mathematic (math) group

  1. Jens Kubieziel (qbi@quitter.se)'s status on Saturday, 24-Mar-2018 20:08:46 EDT
    • Mathematic
    If anyone is looking for a nice text on category theory:
    https://arxiv.org/pdf/1803.05316.pdf
    !math 
    In conversation Saturday, 24-Mar-2018 20:08:46 EDT from quitter.se permalink
  2. Jens Kubieziel (qbi@quitter.se)'s status on Tuesday, 02-Jan-2018 07:43:18 EST
    • Mathematic
    Which math whiz can answer this:
    https://twitter.com/neuwirthe/status/948129765352931328
    !math 
    In conversation Tuesday, 02-Jan-2018 07:43:18 EST from quitter.se permalink
  3. Jens Kubieziel (qbi@quitter.se)'s status on Wednesday, 13-Dec-2017 08:10:03 EST
    • Mathematic
    A nice description of how to read a mathematical text:
    http://www.people.vcu.edu/~dcranston/490/handouts/math-read.html
    !math 
    In conversation Wednesday, 13-Dec-2017 08:10:03 EST from quitter.se permalink
  4. Jens Kubieziel (qbi@quitter.se)'s status on Thursday, 30-Nov-2017 15:41:45 EST
    • Mathematic
    A spammer apologizes for the spam and offers up to 31.4159% off. #pi !math
    In conversation Thursday, 30-Nov-2017 15:41:45 EST from quitter.se permalink
  5. arunisaac (arunisaac@social.systemreboot.net)'s status on Wednesday, 01-Nov-2017 13:07:03 EDT
    • Science (News, Articles, …)
    • Mathematic
    "posits" to replace floating point arithmetic https://www.youtube.com/watch?v=N05yYbUZMSQ !science !math
    In conversation Wednesday, 01-Nov-2017 13:07:03 EDT from social.systemreboot.net permalink

    Attachments

    1. Beating Floats at Their Own Game
      By RichReport from YouTube
  6. Jens Kubieziel (qbi@quitter.se)'s status on Wednesday, 09-Aug-2017 12:43:04 EDT
    • Mathematic
    Without serious math knowledge the problem is probably not solvable.
    !math

    https://twitter.com/johncarlosbaez/status/895259039243816961
    In conversation Wednesday, 09-Aug-2017 12:43:04 EDT from quitter.se permalink
  7. arunisaac (arunisaac@social.systemreboot.net)'s status on Monday, 17-Jul-2017 06:56:19 EDT
    • Science (News, Articles, …)
    • Mathematic
    https://xkcd.com/1861/ https://imgs.xkcd.com/comics/quantum.png !math !science
    In conversation Monday, 17-Jul-2017 06:56:19 EDT from social.systemreboot.net permalink
  8. Jens Kubieziel (qbi@quitter.se)'s status on Tuesday, 07-Mar-2017 16:15:43 EST
    • Mathematic
    • krautspace
    It is always interesting to drop by @krautspace. TIL: truncatable primes
    !math 
    In conversation Tuesday, 07-Mar-2017 16:15:43 EST from quitter.se permalink
  9. arunisaac (arunisaac@social.systemreboot.net)'s status on Tuesday, 07-Feb-2017 11:44:21 EST
    • Mathematic
    http://abstrusegoose.com/230 http://abstrusegoose.com/strips/it_is_obvious.png !math
    In conversation Tuesday, 07-Feb-2017 11:44:21 EST from social.systemreboot.net permalink
  10. arunisaac (arunisaac@social.systemreboot.net)'s status on Saturday, 04-Feb-2017 13:06:07 EST
    • Mathematic
    https://www.youtube.com/watch?v=aAJkLh76QnM !math
    In conversation Saturday, 04-Feb-2017 13:06:07 EST from social.systemreboot.net permalink

    Attachments

    1. Chaos | Chapter 7 : Strange Attractors - The butterfly effect
      By It's so blatant from YouTube
  11. arunisaac (arunisaac@social.systemreboot.net)'s status on Monday, 21-Nov-2016 12:19:47 EST
    • Mathematic
    !math https://www.ted.com/talks/robert_lang_folds_way_new_origami
    In conversation Monday, 21-Nov-2016 12:19:47 EST from social.systemreboot.net permalink

    Attachments

    1. Robert Lang: The math and magic of origami
      By Robert Lang from TED
  12. arunisaac (arunisaac@social.systemreboot.net)'s status on Wednesday, 31-Aug-2016 02:08:28 EDT
    • Mathematic
    • arunisaac
    !math http://imgs.xkcd.com/comics/linear_regression.png
    In conversation Wednesday, 31-Aug-2016 02:08:28 EDT from social.systemreboot.net permalink
  13. arunisaac (arunisaac@social.systemreboot.net)'s status on Thursday, 30-Jun-2016 14:19:45 EDT
    • Mathematic
    !math #magic #humor #topology http://abstrusegoose.com/253 http://abstrusegoose.com/strips/munkres_power_activate.png
    In conversation Thursday, 30-Jun-2016 14:19:45 EDT from social.systemreboot.net permalink
  14. arunisaac (arunisaac@social.systemreboot.net)'s status on Friday, 24-Jun-2016 01:46:00 EDT
    • Mathematic
    cc !math
    In conversation Friday, 24-Jun-2016 01:46:00 EDT from social.systemreboot.net permalink
  15. Jens Kubieziel (qbi@quitter.se)'s status on Monday, 20-Jun-2016 11:13:11 EDT
    • Mathematic
    TIL: @WTiefensee teaches his kids that 1+1=3. :-)
    !math #WWF16 
    In conversation Monday, 20-Jun-2016 11:13:11 EDT from quitter.se permalink
  16. Jens Kubieziel (qbi@quitter.se)'s status on Monday, 06-Jun-2016 07:49:55 EDT
    • Mathematic
    A nice, simple math puzzle by @hdambeck: https://spon.de/aeLco
    !math
    In conversation Monday, 06-Jun-2016 07:49:55 EDT from quitter.se permalink
  17. arunisaac (arunisaac@social.systemreboot.net)'s status on Friday, 03-Jun-2016 04:40:21 EDT
    • Mathematic
    !math https://social.systemreboot.net/url/13933
    How to assign partial credit on an exam of true-false questions? -- Terence Tao
    In conversation Friday, 03-Jun-2016 04:40:21 EDT from social.systemreboot.net permalink

    Attachments

    1. How to assign partial credit on an exam of true-false questions?
      By Terence Tao from What's new

      Note: the following is a record of some whimsical mathematical thoughts and computations I had after doing some grading. It is likely that the sort of problems discussed here are in fact well studied in the appropriate literature; I would appreciate knowing of any links to such.

      Suppose one assigns true-false questions on an examination, with the answers randomised so that each question is equally likely to have “true” as the correct answer as “false”, with no correlation between different questions. Suppose that the students taking the examination must answer each question with exactly one of “true” or “false” (they are not allowed to skip any question). Then it is easy to see how to grade the exam: one can simply count how many questions each student answered correctly (i.e. each correct answer scores one point, and each incorrect answer scores zero points), and give that number as the final grade of the examination. More generally, one could assign some score of A points to each correct answer and some score (possibly negative) of B points to each incorrect answer, giving a total grade of A*C + B*W points when C answers are correct and W are incorrect. As long as A > B, this grade is simply an affine rescaling of the simple grading scheme C and would serve just as well for the purpose of evaluating the students, as well as encouraging each student to answer the questions as correctly as possible.

      In practice, though, a student will probably not know the answer to each individual question with absolute certainty. One can adopt a probabilistic model, where for a given student S and a given question q, the student S may think that the answer to question q is true with probability p_q and false with probability 1-p_q, where p_q is some quantity that can be viewed as a measure of the confidence S has in the answer (with S being confident that the answer is true if p_q is close to 1, and confident that the answer is false if p_q is close to 0); for simplicity let us assume that in S’s probabilistic model, the answers to each question are independent random variables. Given this model, and assuming that the student wishes to maximise his or her expected grade on the exam, it is an easy matter to see that the optimal strategy for S to take is to answer question q true if p_q > 1/2 and false if p_q < 1/2. (If p_q = 1/2, the student can answer arbitrarily.)

      [Important note: here we are not using the term “confidence” in the technical sense used in statistics, but rather as an informal term for “subjective probability”.]

      This is fine as far as it goes, but for the purposes of evaluating how well the student actually knows the material, it provides only a limited amount of information, in particular we do not get to directly see the student’s subjective probabilities p_q for each question. If for instance S answered 7 out of 10 questions correctly, was it because he or she actually knew the right answer for seven of the questions, or was it because he or she was making educated guesses for the ten questions that turned out to be slightly better than random chance? There seems to be no way to discern this if the only input the student is allowed to provide for each question is the single binary choice of true/false.

      But what if the student were able to give probabilistic answers to any given question? That is to say, instead of being forced to answer just “true” or “false” for a given question q, the student was allowed to give answers such as “60% confident that the answer is true” (and hence 40% confidence that the answer is false). Such answers would give more insight as to how well the student actually knew the material; in particular, we would theoretically be able to actually see the student’s subjective probabilities p_q.

      But now it becomes less clear what the right grading scheme to pick is. Suppose for instance we wish to extend the simple grading scheme in which a correct answer given in 100% confidence is awarded one point. How many points should one award a correct answer given in 80% confidence? How about an incorrect answer given in 80% confidence (or equivalently, a correct answer given in 20% confidence)?

      Mathematically, one could design a grading scheme by selecting some grading function f from [0,1] to the reals, and then awarding a student f(p) points whenever they indicate the correct answer with a confidence of p. For instance, if the student was p confident that the answer was “true” (and hence 1-p confident that the answer was “false”), then this grading scheme would award the student f(p) points if the correct answer actually was “true”, and f(1-p) points if the correct answer actually was “false”. One can then ask the question of what functions f would be “best” for this scheme.

      Intuitively, one would expect that f should be monotone increasing – one should be rewarded more for being correct with high confidence, than correct with low confidence. On the other hand, some sort of “partial credit” should still be assigned in the latter case. One obvious proposal is to just use a linear grading function f(p) = 2p - 1 – thus for instance a correct answer given with 75% confidence might be worth 0.5 points. But is this the “best” option?

      To make the problem more mathematically precise, one needs an objective criterion with which to evaluate a given grading scheme. One criterion that one could use here is the avoidance of perverse incentives. If a grading scheme is designed badly, a student may end up overstating or understating his or her confidence in an answer in order to optimise the (expected) grade: the optimal level of confidence q for a student to report on a question may differ from that student’s subjective confidence p. So one could ask to design a scheme so that q is always equal to p, so that the incentive is for the student to honestly report his or her confidence level in the answer.

      This turns out to give a precise constraint on the grading function f. If a student thinks that the answer to a question is true with probability p and false with probability 1-p, and enters in an answer of “true” with confidence q (and thus “false” with confidence 1-q), then the student would expect a grade of

        p*f(q) + (1-p)*f(1-q)

      on average for this question.

      To maximise this expected grade (assuming differentiability of f, which is a reasonable hypothesis for a partial credit grading scheme), one performs the usual manoeuvre of differentiating in the independent variable q and setting the result to zero, thus obtaining

        p*f'(q) - (1-p)*f'(1-q) = 0.

      In order to avoid perverse incentives, the maximum should occur at q = p, thus we should have

        p*f'(p) = (1-p)*f'(1-p)

      for all p in [0,1]. This suggests that the function p*f'(p) should be constant. (Strictly speaking, it only gives the weaker constraint that p*f'(p) is symmetric around p = 1/2; but if one generalised the problem to allow for multiple-choice questions with more than two possible answers, with a grading scheme that depended only on the confidence assigned to the correct answer, the same analysis would in fact force p*f'(p) to be constant in p; we leave this computation to the interested reader.) In other words, f should be of the form f(p) = C*log(p) + D for some constants C and D; by monotonicity we expect C to be positive. If we make the normalisation f(1/2) = 0 (so that no points are awarded for a 50-50 split in confidence between true and false) and f(1) = 1, one arrives at the grading scheme

        f(p) = log_2(2p).

      Thus, if a student believes that an answer is “true” with confidence p and “false” with confidence 1-p, he or she will be awarded log_2(2p) points when the correct answer is “true”, and log_2(2(1-p)) points if the correct answer is “false”. The following table gives some illustrative values for this scheme:

      Confidence that answer is “true”   Points awarded if answer is “true”   Points awarded if answer is “false”
        0%                               -infinity                             1.00
       10%                               -2.32                                 0.85
       25%                               -1.00                                 0.58
       50%                                0.00                                 0.00
       75%                                0.58                                -1.00
       90%                                0.85                                -2.32
      100%                                1.00                                -infinity

      Note the large penalties for being extremely confident of an answer that ultimately turns out to be incorrect; in particular, answers of 100% confidence should be avoided unless one really is absolutely certain as to the correctness of one’s answer.
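
      As a quick sketch of the incentive property derived above (the code and names below are mine, not from the post), one can check numerically that a student whose subjective probability is p maximises his or her expected grade under the log_2(2p) scheme by reporting exactly q = p:

        import math

        def score(confidence):
            """Points awarded when the answer given this confidence turns out to be correct."""
            if confidence == 0:
                return float("-inf")
            return math.log2(2 * confidence)

        def expected_grade(p, q):
            """Expected grade on one question whose answer the student believes is "true"
            with probability p, if he or she reports confidence q for "true"."""
            return p * score(q) + (1 - p) * score(1 - q)

        p = 0.8                                        # the student's actual belief
        best_q = max((i / 100 for i in range(1, 100)),
                     key=lambda q: expected_grade(p, q))
        print(best_q)                                  # 0.8 -- honest reporting is optimal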

      The total grade given under such a scheme to a student who answers each question q to be “true” with confidence p_q, and “false” with confidence 1-p_q, is

        sum over questions q whose correct answer is “true” of log_2(2 p_q)
          + sum over questions q whose correct answer is “false” of log_2(2 (1-p_q)).

      This grade can also be written as

        k + log_2(L)

      where k is the number of questions and

        L := (product over questions q whose correct answer is “true” of p_q) x (product over questions q whose correct answer is “false” of (1-p_q))

      is the likelihood of the student S’s subjective probability model, given the outcome of the correct answers. Thus the grade system here has another natural interpretation, as being an affine rescaling of the log-likelihood. The incentive is thus for the student to maximise the likelihood of his or her own subjective model, which aligns well with standard practices in statistics. From the perspective of Bayesian probability, the grade given to a student can then be viewed as a measurement (in logarithmic scale) of how much the posterior probability that the student’s model was correct has improved over the prior probability.
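
      This identity is easy to check numerically; here is a minimal sketch with made-up numbers (the list names and values are mine, not from the post):

        import math

        # Hypothetical five-question exam: the correct answers, and the confidence the
        # student reported for "true" on each question.
        answers     = [True, False, True, True, False]
        confidences = [0.9,  0.2,   0.6,  0.5,  0.7]

        # Grade under the log_2(2p) scheme: score each question on the confidence
        # that was assigned to the correct answer.
        grade = sum(math.log2(2 * (p if a else 1 - p))
                    for a, p in zip(answers, confidences))

        # Likelihood L of the student's subjective model given the correct answers.
        likelihood = math.prod(p if a else 1 - p
                               for a, p in zip(answers, confidences))

        print(grade)                                  # total grade
        print(len(answers) + math.log2(likelihood))   # k + log_2(L) -- the same number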

      One could propose using the above grading scheme to evaluate predictions to binary events, such as an upcoming election with only two viable candidates, to see in hindsight just how effective each predictor was in calling these events. One difficulty in doing so is that many predictions do not come with explicit probabilities attached to them, and attaching a default confidence level of 100% to any prediction made without any such qualification would result in an automatic grade of negative infinity if even one of these predictions turned out to be incorrect. But perhaps if a predictor refuses to attach a confidence level to his or her predictions, one can assign some default level p of confidence to these predictions, and then (using some suitable set of predictions from this predictor as “training data”) find the value of p that maximises this predictor’s grade. This level p can then be used going forward as the default level of confidence to apply to any future predictions from this predictor.
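
      As a sketch of that calibration step (my own code and assumptions: the "training data" here is just a list of hit/miss outcomes for the predictor's unqualified calls), a grid search shows that the grade-maximising default confidence is simply the predictor's empirical hit rate:

        import math

        def total_grade(p, outcomes):
            """Grade of a predictor who attached the default confidence p to every call."""
            return sum(math.log2(2 * p) if hit else math.log2(2 * (1 - p))
                       for hit in outcomes)

        # Hypothetical training record: True means the prediction was correct.
        outcomes = [True, True, False, True, True, False, True, True]

        best_p = max((i / 1000 for i in range(1, 1000)),
                     key=lambda p: total_grade(p, outcomes))
        print(best_p, sum(outcomes) / len(outcomes))   # both 0.75: the hit rate wins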

      The above grading scheme extends easily enough to multiple-choice questions. But one question I had trouble with was how to deal with uncertainty, in which the student does not know enough about a question to venture even a probability of being true or false. Here, it is natural to allow a student to leave a question blank (i.e. to answer “I don’t know”); a more advanced option would be to allow the student to enter his or her confidence level as an interval range (e.g. “I am between 50% and 70% confident that the answer is ‘true’”). But now I do not have a good proposal for a grading scheme; once there is uncertainty in the student’s subjective model, the problem of that student maximising his or her expected grade becomes ill-posed due to the “unknown unknowns”, and so the previous criterion of avoiding perverse incentives becomes far less useful.

  18. arunisaac (arunisaac@social.systemreboot.net)'s status on Wednesday, 04-May-2016 02:28:59 EDT
    • Mathematic
    !math Pizzas and differential geometry -- a friendly introduction to Gaussian curvature
    http://www.wired.com/2014/09/curvature-and-strength-empzeal/
    In conversation Wednesday, 04-May-2016 02:28:59 EDT from social.systemreboot.net permalink

    Attachments

    1. How a 19th Century Math Genius Taught Us the Best Way to Hold a Pizza Slice
      from WIRED
      Why does bending a pizza slice help you eat it? How does a mantis shrimp's punch use a Pringles chip? A surprising geometrical link between curvature and strength.
  19. Jens Kubieziel (qbi@quitter.se)'s status on Thursday, 10-Jul-2014 09:56:47 EDT
    • Mathematic
    I do like the translation »finished bodies« for endliche Körper (finite fields). ;) !math
    In conversation Thursday, 10-Jul-2014 09:56:47 EDT from quitter.se permalink
  20. Jens Kubieziel (qbi@quitter.se)'s status on Saturday, 26-Apr-2014 07:52:26 EDT
    • Mathematic
    The game 2048 and the Lambert W function: https://loomsci.wordpress.com/2014/04/22/2048-and-the-lambert-w-function/ !math
    In conversation Saturday, 26-Apr-2014 07:52:26 EDT from quitter.se permalink

    Attachments

    1. 2048 and the Lambert W function
      By nickloomis from Adventures in Loom-Science

      A number of us at the office have been playing 2048, a simple yet challenging puzzle game. (I credit xkcd for getting me started.) It also has some fun math for us programmers, and not just because it uses powers of two. As you’ll see, predicting the level of game play from a player’s score leads to a Lambert W function, used in quantum mechanics, granular fluid flow, diode modeling, and in my own life, holography.

      The goal of 2048 is to combine like-numbered tiles to build up to a 2048-valued tile. You can slide all of the tiles on a game board in the same direction at each move, and a tile with either a 2 or a 4 is randomly placed in an open space after your move. Two neighboring tiles with the same number that slide into each other will clobber during the slide move, so that two eights will combine to form a 16-valued tile, two 16’s combine to form a 32, and so on. You are awarded the same number of points as the tiles you clobber together.

      The version of 2048 that I downloaded allows you to continue playing after reaching the 2048 tile and includes a public scoreboard. When I started playing, there was someone who had scored 79,000 points. As of mid-April, someone had reached an astounding 271,192 points. But what does that actually mean? What tile combination were they able to reach? How long did that take? That’s where the math starts!

      Three successive boards of 2048, left to right. The tiles were slid upwards in each move. After each move, a random tile was added to the board.

      It is easier to understand the sequence of moves required to reach the 2048 tile by redrawing the moves required to reach a particular tile as a graph. For example, here’s a simple graph of which tiles need to be combined to reach 8:

      Three merges are required to get to an 8-tile starting from 2-tiles.

      Computer scientists will recognize this as a binary tree. Counting up the number of merges is the same as counting the number of nodes on everything but the first level of the graph. Making a simple table shows the pattern clearly:

        Tile    Merges (all 2-tiles spawned)    Merges (all 4-tiles spawned)
        4       1                               0
        8       3                               1
        16      7                               3
        32      15                              7
        ...     ...                             ...
        2^n     2^(n-1)-1                       2^(n-2)-1

      In the worst case, the game randomly populates open spaces with 2-tiles, and you need 2^(n-1)-1 merges to reach the 2^n tile. In the best case, 4-tiles are placed in all the open spaces by the game, and you need 2^(n-2)-1 merges.

      On average, let’s assume that you merge one set of tiles per second*. Some moves don’t merge tiles, some moves merge several sets of tiles, some moves are faster than one second, some moves take some thinking… so on average, one second per merge is probably ball-park for someone who’s been playing 2048 for long enough. Reaching 2048 (= 2^11) requires, at minimum, 511 merges and about 9 minutes (if all 4’s are randomly placed on the board) or up to 1023 merges (if all 2’s are placed on the board) and about 17 minutes. (It took me 13.5 minutes just now, for example.)

      *If you’re starting out, one second per move may seem ludicrous. Talking to my coworkers who have been playing 2048 for long enough, they’ve all come up with heuristics for which moves and merges to make, so that a one-second-per-merge average is possible.
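
      As a quick sketch of that estimate (the function name is mine), assuming the one-merge-per-second rate above:

        def merges_to_reach(n, all_fours=False):
            """Merges needed to build the 2^n tile if the game spawns only 4-tiles
            (best case) or only 2-tiles (worst case)."""
            return 2 ** (n - 2) - 1 if all_fours else 2 ** (n - 1) - 1

        n = 11                                   # 2048 = 2^11
        for all_fours in (True, False):
            m = merges_to_reach(n, all_fours)
            print(m, "merges, about", round(m / 60), "minutes")
        # 511 merges, about 9 minutes; 1023 merges, about 17 minutes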

      The next question relates to the score: given that someone scored S on a game, what tile level did they likely reach? You score points based on the tiles which clobber together, so that clobbering two 2-tiles gets you 4 points, and clobbering two 16-tiles nets you 32 points for that move. If you graph the points out, the total score accumulated by the time you reach a certain tile becomes more obvious:

      Each merge scores the same number of points as the tiles that were combined. The top number in each box is the score for that merge, while the bottom number is the total number of points accumulated by the time you reach that level. For example, by the time you reach an 8-tile, you will have accumulated 4+4+8 = 16 points.

      Ha! In order to reach the 2^n tile, you will have scored (n-1)*2^n points (at most — and that would be the score just to build that tile). It’s worth noting that this equation assumes that all the auto-generated tiles are 2-tiles, and thus provides a maximum limit to the score.
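
      One way to sanity-check that (n-1)*2^n figure (this snippet is mine, and assumes, as above, that only 2-tiles are spawned): building a single 2^n tile takes 2^(n-k) merges that each score 2^k points, for k = 2 through n, so each of those n-1 levels contributes exactly 2^n points.

        def score_to_build(n):
            """Total points scored while building one 2^n tile from spawned 2-tiles."""
            return sum(2 ** (n - k) * 2 ** k for k in range(2, n + 1))

        for n in range(3, 12):
            assert score_to_build(n) == (n - 1) * 2 ** n
        print(score_to_build(3), score_to_build(11))   # 16 (the 8-tile example above), 20480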

      Here’s where the math gets interesting: try going backward and finding the tile level n corresponding to a specific score, S. If you work through the math and put in a few judicious moves (which will become useful in a moment),

        S = (n-1)*2^n   =>   (S*ln 2)/2 = ((n-1)*ln 2) * e^((n-1)*ln 2),

      you can reach a stage where you have an equation of the form

        y = x * e^x,

      with

        x = (n-1)*ln 2   and   y = (S*ln 2)/2.

      This is where a function known as the Lambert W comes to the rescue: it is the inverse solution to this exact problem. In other words, the value of W(y) is defined so that setting x = W(y) makes the left and right sides of the equation match,

        y = W(y) * e^(W(y)).

      Using the Lambert W, the tile-level-vs-score equation can be factored further to get n directly:

        n = 1 + W((S*ln 2)/2) / ln 2.

      This equation lets us know, to within a fairly good degree, what tile level a player reached when achieving a particular score. For example, say I got a score of S = 20492. Putting this into the equation gives n = 11.0007, which corresponds to the 2048 tile (plus a few more tiles beyond 2048). I found a score on youtube of 71,684, which gives n = 12.6, which is a 4096 tile and a 2048 tile. Another youtube video has a score of 151,796, which gives n = 13.6, an 8192 tile and a 4096 tile both. The high score on my game’s leaderboard of S =271,152 means that a Mr. Alexius Timothy managed to get a 16384 tile.
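
      Here is that inversion as a short sketch in code (my own, not from the post), assuming SciPy's scipy.special.lambertw is available:

        import math
        from scipy.special import lambertw

        def tile_level(score):
            """Invert S = (n-1)*2^n for n via the Lambert W function."""
            y = score * math.log(2) / 2
            return 1 + lambertw(y).real / math.log(2)

        for s in (20492, 71684, 151796):
            print(s, round(tile_level(s), 2))   # roughly 11.0, 12.6 and 13.6, as above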

      Plots of the tile level versus score

      How long does it take to reach 151,796? If n = 13.6, that’s around 6000 merges, or 100 minutes at one merge per second at worst-case. The player in that video gets to their high score after 76 minutes. Not too bad of an estimate.

      If you don’t have Matlab handy but want to estimate how well your coworkers did, it may also help to note that the score is approximately linear with tile value as S gets very large. (This is because the expansion of the Lambert W for large arguments is given by natural logs, and taking 2^ln(…) gives an approximately linear result.) Around 20,000 points (close to the 2048 tile), the slope is about 0.1, and drops slowly to about 0.083 by 100,000 points. So if your coworker announces that he scored 55,000 points in a game, you could estimate that 55,000 * 0.09 = 4950, and he probably reached the 4096 tile.

      One of the recurring caveats is that the equations I’ve presented here are based on the assumption that the game generates all 2-tiles or 4-tiles in the blank spaces. In the actual game, there is a distribution of 2’s and 4’s. The popular version from Gabriele Cirulli has a 90% chance of generating a 2-tile and a 10% chance of generating a 4-tile, which you can see in his code:

       This means that out of every 20 tiles which are auto-generated, you can expect 18 to be 2-tiles and two to be 4-tiles. The result is that, out of every eleven 4-tiles, two are randomly generated and the rest are built through merging, as described by the equations thus far:

      Intuitively, you’d expect very similar results from Mr Cirulli’s 90-10 probability distribution as if the game had generated all 2-tiles.

      The precise answer is only a step away from the intuitive. The only difference occurs at the very top of the graph (as I’ve drawn it), where the 2-tiles merge into 4-tiles. Let’s call f_m the fractional number of merges you have in the real game (in other words, using a non-zero probability of generating 4-tiles) compared to the case where the game generates all 2-tiles. In the example above, where 18 2-tiles and two 4-tiles are generated, you “lose” two of the 11 merges you could have had if all 2-tiles had been created, and f_m is 9/11. In general, if P_2 is the probability of getting a 2-tile, then f_m is easy to find:

        f_m = (P_2/2) / (1 - P_2/2) = P_2 / (2 - P_2)

      It’s interesting to note that if you used a value for P_2 that is related to a power of 2, like 1-2^-3 = 1-1/8 = 0.8750, f_m is not a power of 2; f_m = 7/9. I’d also suggest that Mr Cirulli’s choice of using 0.9 was interesting in that it destroys some of the power-of-2 symmetry.

      We can use f_m to modify the equations and find the expected number of merges or points. Again noting that f_m affects the number of merges or points in only the top-most rows of the graph, we can divide the graphs into two sections: the top row and the bottom pyramid. Looking first to the score, each row of the tree contributes 2^n points to the overall total when trying to reach the 2^nth tile, so that you have 2^n*(n-2) points from everything except the top-most row of 2-tiles merging into 4-tiles. The number of points you would expect from the 2-tiles merging is then f_m*2^n, and the expected point total is

        S = 2^n*(n-2) + f_m*2^n = 2^n*(n - 2 + f_m)

      The number of merges from the 4-tiles on down to the 2^nth tile is 2^(n-2)-1. If all the auto-generated tiles were 2-tiles, you would need an additional 2^(n-2) merges to convert all of the 2-tiles into 4’s. The expected number of merges is then

        M = 2^(n-2) - 1 + f_m*2^(n-2) = (1 + f_m)*2^(n-2) - 1

      If the game randomly generates almost all 2-tiles, then f_m is almost 1, and the equations reduce back down. You can also see that the score, S, is almost constant as f_m changes, since it is dominated by the (n-2) term as n becomes large. Given the number of different ports of 2048, that’s a good thing: even if my port uses a different P_2 (and thus a different f_m), I can still compare my scores against my coworkers’ scores and expect them to compare well. It doesn’t matter that CV and I might be playing different versions, he’s still got a higher score.
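
      Putting the corrected formulas together as a short sketch (the function names are mine):

        def f_m(p2):
            """Fraction of the 2->4 merges you still make when 2-tiles spawn with probability p2."""
            return p2 / (2 - p2)

        def expected_score(n, p2):
            return 2 ** n * (n - 2 + f_m(p2))

        def expected_merges(n, p2):
            return (1 + f_m(p2)) * 2 ** (n - 2) - 1

        print(f_m(0.9), f_m(0.875))       # 9/11 and 7/9, as above
        print(expected_score(11, 0.9))    # a little under the all-2s maximum of 20480
        print(expected_merges(11, 0.9))   # a little under the all-2s worst case of 1023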

      Before closing out, I did want to talk a bit about the Lambert W function — or at a minimum, point you, dear reader, to more reading if you’re curious. It has a number of unique properties, especially relating to e, pi, 0, and interesting integrals; check the Wikipedia article for a listing. The function comes up in quantum mechanics as the energy of an electron in a conductor sandwiched between two other conductors which almost touch. The Lambert W gives the values for digit-shifting using powers. (If you have ideas of how digit-shifting could be used, I’d like to hear them.) The function generates the so-called Omega constant. The current in a diode has the Lambert W as part of its solution. It’s also worth noting that, since W is related to exponentials, it can also be used with complex numbers.

      Finally, I wanted to pass on a link to an interesting port of 2048, Fe[26], where you fuse elements together to create higher “valued” isotopes — starting from hydrogen and working up to iron. Just beware the half-lives of the unstable isotopes and stable atoms which can’t be used…


Jonkman Microblog is a social network, courtesy of SOBAC Microcomputer Services. It runs on GNU social, version 1.2.0-beta5, available under the GNU Affero General Public License.

All Jonkman Microblog content and data are available under the Creative Commons Attribution 3.0 license.
