Jonkman Microblog

Notices by @mcscx2@quitter.no (mcscx2@quitter.no), page 14

  1. @mcscx2@quitter.no (mcscx2@quitter.no)'s status on Monday, 28-May-2018 21:44:43 EDT
    in reply to
    • Sorokin Alexei
    • Annah
    • lakwnikos
    • @mcscx2@quitter.no
    • abjectio
    RP @maiyannah : @knuthollund @lakwnikos @mcscx2 Oh @xrevan86 had a patch for some of these really slow joins. I have it in postActiv
    In conversation Monday, 28-May-2018 21:44:43 EDT from quitter.no permalink
  2. abjectio (knuthollund@quitter.no)'s status on Monday, 28-May-2018 17:33:47 EDT
    in reply to
    • lakwnikos
    • @mcscx2@quitter.no
    @mcscx2 @lakwnikos Ref previous comment - maybe "LIMIT ROWS EXAMINED" should be implemented? - https://mariadb.com/kb/en/library/limit-rows-examined/
    In conversation Monday, 28-May-2018 17:33:47 EDT from quitter.no permalink Repeated by mcscx2

    Attachments

    1. LIMIT ROWS EXAMINED
      from MariaDB KnowledgeBase
      Means to terminate execution of SELECTs that examine too many rows
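    For context, the MariaDB clause referenced in the attachment above is set per query, not per server. A minimal sketch of how it could guard the kind of timeline query discussed in this thread (the table and column names follow the slow-query log quoted here; the 100000 threshold is an arbitrary illustrative value):

    ```sql
    -- Return at most 200 notice ids, but abort early (with a warning and a
    -- possibly incomplete result) once the server has examined 100000 rows,
    -- instead of scanning millions as in the slow query quoted in this thread.
    SELECT id FROM notice
    WHERE notice.created > '2018-01-14 23:36:01'
    ORDER BY notice.id DESC
    LIMIT 200 ROWS EXAMINED 100000;
    ```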
  3. abjectio (knuthollund@quitter.no)'s status on Monday, 28-May-2018 17:13:28 EDT
    in reply to
    • lakwnikos
    • @mcscx2@quitter.no
    @mcscx2 @lakwnikos I don't think there is a DDoS. A classic slow query is the following example (it examines about 3.8 million rows).
    "# Query_time: 16.042286 Lock_time: 0.000140 Rows_sent: 104 Rows_examined: 3776090
    # Rows_affected: 0
    SET timestamp=1527533585;
    SELECT id FROM notice
    WHERE ( notice.created > "2018-01-14 23:36:01" )
      AND ( notice.id IN (SELECT notice_id FROM reply WHERE profile_id=45991)
         OR notice.profile_id IN (SELECT subscribed FROM subscription WHERE subscriber=45991)
         OR notice.id IN (SELECT notice_id FROM group_inbox WHERE group_id IN
              (SELECT group_id FROM group_member WHERE profile_id=45991))
         OR notice.id IN (SELECT notice_id FROM attention WHERE profile_id=45991) )
    ORDER BY notice.id DESC
    LIMIT 0, 200;"
    Also, queries from the search field do a "LIKE '%yoursearch%'", which takes a long time; a LIKE with a leading wildcard cannot use indexes. However, the SELECT above (example 1) is the one most often reported among the slow queries.
    In conversation Monday, 28-May-2018 17:13:28 EDT from quitter.no permalink Repeated by mcscx2
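    A common way around the leading-wildcard LIKE problem mentioned above is a FULLTEXT index, which MariaDB/MySQL can use for word searches. This is a sketch only: the `content` column name is an assumption about the GNU social schema, not something confirmed in this thread:

    ```sql
    -- A B-tree index cannot serve LIKE '%yoursearch%' because the pattern
    -- starts with a wildcard; a FULLTEXT index supports word matching instead.
    ALTER TABLE notice ADD FULLTEXT INDEX notice_content_ft (content);

    SELECT id FROM notice
    WHERE MATCH (content) AGAINST ('yoursearch' IN NATURAL LANGUAGE MODE)
    ORDER BY id DESC
    LIMIT 0, 200;
    ```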
  4. @mcscx2@quitter.no (mcscx2@quitter.no)'s status on Monday, 28-May-2018 19:03:52 EDT
    in reply to
    • GNU Social
    • abjectio
    @knuthollund maybe other !gnusocial admins could check on their instances whether they have those slow queries, too.

    Because I wonder why this seems to affect only quitter.no (apart from quitter.se, which is currently down), _even though_ quitter.no already has powerful hardware: SSD, 12 GB RAM and the whole database kept in memory. I would expect there to be at least some less powerful instances around, but I still haven't found any other instance affected by this #every-few-minutes-unresponsive-for30-seconds-issue.
    In conversation Monday, 28-May-2018 19:03:52 EDT from quitter.no permalink
  5. @mcscx2@quitter.no (mcscx2@quitter.no)'s status on Monday, 28-May-2018 18:48:34 EDT
    in reply to
    • abjectio
    @knuthollund interesting. Is that a query for the id of a notice created after "2018-01-14 23:36:01" (plus some other criteria)? I wonder whether that is already strange in itself.

    How often does such a slow query occur? In practice the non-responding conditions come every 2-5 minutes and always seem to last ~29 or 30 seconds.

    (speculation from here:)
    I suspect that the non-responsiveness is caused by something out there in the fediverse, _maybe_ by the "thread completion" feature other OStatus implementations seem to have.

    Example: I create a post, but I don't have any subscribers on most of the thousands of Mastodon instances, so my post would normally not federate there. Then one well-known user (with followers on all 1000 instances) replies to my post. Consequence: the 1000 Mastodon instances might ask quitter.no (all at the same time): "hey quitter.no, give us that first post to which that well-known user replied"
    In conversation Monday, 28-May-2018 18:48:34 EDT from quitter.no permalink
  6. @mcscx2@quitter.no (mcscx2@quitter.no)'s status on Monday, 28-May-2018 15:04:31 EDT
    in reply to
    • Michael Vogel
    @heluecht when he gives a speech he looks even worse. Aggressive-looking facial expressions.
    In conversation Monday, 28-May-2018 15:04:31 EDT from quitter.no permalink
  7. @mcscx2@quitter.no (mcscx2@quitter.no)'s status on Monday, 28-May-2018 14:41:34 EDT
    in reply to
    • Biene Zwo
    @bienezwo but the umlaut freedom has its price: quitter\.no is unreachable for 30 seconds every few minutes.
    In conversation Monday, 28-May-2018 14:41:34 EDT from quitter.no permalink
  8. Digitalcourage e.V. (digitalcourage@chaos.social)'s status on Monday, 28-May-2018 13:05:27 EDT

    The Digitalcourage university group writes on the Fediverse:

    "Scaremongering and misinformation instead of data protection and information security: an officer of the Bundeswehr talks about the dark web. Our statement on the lecture given on 14 May at Bielefeld University:"

    https://chaos.social/web/statuses/100106770788006321

    In conversation Monday, 28-May-2018 13:05:27 EDT from chaos.social permalink Repeated by mcscx2
  9. sö (soe@social.tchncs.de)'s status on Monday, 28-May-2018 08:08:48 EDT
    • Digitalcourage e.V.

    @Digitalcourage recently recommended the film "Das #Microsoft Dilemma". Now that I have watched it, I want to push it once more.

    The film touches on the problem of #closedSource software and, in passing, also points out a few fundamental weaknesses of our political system.

    It is presented in such a way that you can link it outside the bubble, to your mum or your business-administration colleague.

    Link: http://p.dw.com/p/2xUJj

    #openSource #LiMux #Lobbyismus

    In conversation Monday, 28-May-2018 08:08:48 EDT from social.tchncs.de permalink Repeated by mcscx2
  10. @mcscx2@quitter.no (mcscx2@quitter.no)'s status on Monday, 28-May-2018 09:27:01 EDT
    • Stephan Maus
    @ouroboros are there actually clips of the birth process on YouTube yet? That's bound to come too. And then viewers can upvote, downvote and comment.
    In conversation Monday, 28-May-2018 09:27:01 EDT from quitter.no permalink
  11. ˗ˏˋ Liaizon Wakest ˎˊ˗ (wakest@mastodon.social)'s status on Sunday, 27-May-2018 19:01:46 EDT
    • Jason Scott

    is anyone working on saving or archiving any of the content from gnusocial.de? they are deleting everything in a few days. it's one of the first large instances of the #fediverse, and I am sure many important discussions have taken place there that would be historically relevant to keep. @textfiles ?

    In conversation Sunday, 27-May-2018 19:01:46 EDT from mastodon.social permalink Repeated by mcscx2
  12. Elias Schwerdtfeger (goebelmasse@quitter.no)'s status on Sunday, 27-May-2018 17:53:25 EDT
    Door – https://quitter.no/attachment/1778199 #Hannover #Linden #Ihmezentrum #Ruine #Zerfall #Graffiti #Foto
    In conversation Sunday, 27-May-2018 17:53:25 EDT from quitter.no permalink Repeated by mcscx2
  13. @mcscx2@quitter.no (mcscx2@quitter.no)'s status on Sunday, 27-May-2018 19:12:15 EDT
    • Marcus
    • Patrick Breyer
    @patrickbreyer I am coming back, somewhat belatedly, to the question about GNUsocial instances that don't log IP addresses. Unfortunately, because of ongoing technical difficulties, I cannot currently recommend quitter.se and quitter.no to you after all.

    Next, gnusocial.ch comes to mind...

    Hello @marcus: because of the shutdown of gnusocial.de, @patrickbreyer is looking for a new instance that doesn't log IP addresses. Is that the case with you?
    In conversation Sunday, 27-May-2018 19:12:15 EDT from quitter.no permalink
  14. @mcscx2@quitter.no (mcscx2@quitter.no)'s status on Sunday, 27-May-2018 16:38:25 EDT
    in reply to
    • GNU Social
    • lakwnikos
    • @mcscx2@quitter.no
    • abjectio
    @lakwnikos @knuthollund Another thought:
    3) Could this be a denial-of-service attack against quitter.se and quitter.no? Or some other misconfigured instance flooding q.no and q.se?

    Because: some 3 weeks ago I told @knuthollund on #IRC about the issue and he said he would have a look. Soon after that the problem was completely gone and quitter.no worked flawlessly for at least the rest of the day! And moreover: quitter.se also worked flawlessly! But later it turned out @knuthollund didn't actually do anything at that point, not even restart anything.

    So I think that something on the fediverse network must have changed within those couple of hours. I think the cause of the problem is somewhere out on the internet, and whatever it is stopped its weird behaviour against quitter.se and quitter.no just for a while. Too bad the problem came back the next day.

    I think one could check the logs of the webserver or use wireshark to look out for weird/excessive traffic.
    !gnusocial
    In conversation Sunday, 27-May-2018 16:38:25 EDT from quitter.no permalink
  15. @mcscx2@quitter.no (mcscx2@quitter.no)'s status on Sunday, 27-May-2018 16:36:51 EDT
    • GNU Social
    • lakwnikos
    • @mcscx2@quitter.no
    • hannes pannes
    • abjectio
    @lakwnikos Regarding the "not-responding database server every few minutes" phenomenon I really wonder what the cause of it could be.

    1) I think only quitter.se and quitter.no are affected. I did some research and asked on IRC but couldn't find any other instance with this problem yet

    At https://fediverse.network/quitter.no we can see a visualisation of the outages: https://quitter.no/attachment/1778104 . I checked a couple of other instances and there was none with a pattern like this.

    2) So why only quitter.se and quitter.no? They are independent instances. Or is there some config setting that quitter.no got from quitter.se back when .no was created? Maybe @knuthollund has an idea?

    @knuthollund @hannes
    [sorry for repost but I missed adding the !gnusocial tag]
    In conversation Sunday, 27-May-2018 16:36:51 EDT from quitter.no permalink
  16. @mcscx2@quitter.no (mcscx2@quitter.no)'s status on Sunday, 27-May-2018 16:17:11 EDT
    in reply to
    • lakwnikos
    • @mcscx2@quitter.no
    • hannes pannes
    • abjectio
    @knuthollund @lakwnikos Besides: as a user I can still live with issues like this. GNUsocial is a grassroots network for me and I like instances run by ordinary people, even if there are occasional issues. It doesn't need to be "professional"! @hannes
    In conversation Sunday, 27-May-2018 16:17:11 EDT from quitter.no permalink

Jonkman Microblog is a social network, courtesy of SOBAC Microcomputer Services. It runs on GNU social, version 1.2.0-beta5, available under the GNU Affero General Public License.

Creative Commons Attribution 3.0 All Jonkman Microblog content and data are available under the Creative Commons Attribution 3.0 license.
