Author: sjl99ux5ojzh

  • curry

    Curry

    Convenient implementation of function
    currying and partial application.

    Installation

    The library can be installed into any PHP application:

    $ composer require phpfn/curry

    To use the library, make sure to include vendor/autoload.php
    in your file.

    <?php
    
    require __DIR__ . '/vendor/autoload.php';

    Usage

    Left currying

    Left currying can be thought of as appending arguments to an array
    and then applying that array of arguments to the function.

    Left currying returns a function partially applied
    starting from the first (leftmost) argument.

    $fn = \curry(function($a, $b) { return $a + $b; });
    
    $fn(3);      // ($a = 3) + ($b = ?)
    $fn(3)(4);   // ($a = 3) + ($b = 4)

    Right currying

    Right currying, by contrast, can be thought of as
    collecting arguments from the right: the last parameter is bound first.

    $fn = \rcurry(function($a, $b) { return $a + $b; });
    
    $fn(3);      // ($a = ?) + ($b = 3)
    $fn(3)(4);   // ($a = 4) + ($b = 3)

    Partial application

    Partial application means you can supply arguments in arbitrary positions,
    skipping the ones you do not know yet with the placeholder _.

    $fn = \curry(function($a, $b, $c) { return $a + $b * $c; });
    
    $fn = $fn(_, 3, 4); // ($a = ?)  + ($b = 3) * ($c = 4)
    echo  $fn(42);      // ($a = 42) + ($b = 3) * ($c = 4)

    $fn = \curry(function($a, $b, $c) { return $a + $b * $c; });
    
    $fn = $fn(_, 3, _); // ($a = ?)  + ($b = 3) * ($c = ?)
    $fn->lcurry(42);    // ($a = 42) + ($b = 3) * ($c = ?)
    $fn->rcurry(23);    // ($a = ?)  + ($b = 3) * ($c = 23)

    $fn = \curry(function($a, $b, $c) { return $a + $b * $c; });
    
    $sum  = $fn(7, 9);    // 7 + 9 * ?
    $sum(6);              // 7 + 9 * 6 
    
    $mul  = $fn(_, 7, 9); // ? + 7 * 9
    $mul(6);              // 6 + 7 * 9
    
    $test = $fn(_, 7, _); // ? + 7 * ?
    $test(6);             // 6 + 7 * ? 
    
    $test = $fn(_, 7);    // ? + 7 * ?
    $test->rcurry(6);     // ? + 7 * 6 

    API

    Functions

    • lcurry(callable $fn, ...$args): Curried (also exposed as curry)

    Left currying (arguments are appended, like array_push)

    • rcurry(callable $fn, ...$args): Curried

    Right currying

    • uncurry(callable $fn): mixed

    Returns the result, or a partially applied function if not all arguments are bound
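
    A quick sketch of the free functions above (based on the signatures listed;
    exact return values depend on the library version):

    // \lcurry is left currying with optional initial arguments
    $add = \lcurry(function ($a, $b) { return $a + $b; }, 3);
    $add(4);    // ($a = 3) + ($b = 4)

    // \rcurry binds arguments from the right instead
    $sub = \rcurry(function ($a, $b) { return $a - $b; }, 1);
    $sub(10);   // ($a = 10) - ($b = 1)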

    Curried

    • $fn->__invoke(...$args) or $fn(...$args)

    The magic method that allows the object to be invoked as a callable

    • $fn->lcurry(...$args)

    A method that returns a new function with left currying

    • $fn->rcurry(...$args)

    A method that returns a new function with right currying

    • $fn->reduce()

    Reduces the composition of functions, i.e. collapses the function to a single value: the result of calling it.

    • $fn->uncurry()

    Attempts to reduce the function to its result, or to another function that will return the result when fully applied

    • $fn->__toString()

    Just a function dump

    Visit original content creator repository
    https://github.com/phpfn/curry

  • ndsm

    Code description

    Code computes the potential (current-free) magnetic field on a rectilinear mesh in a Cartesian box, given
    the normal component of the magnetic field on each boundary face. The solution is computed using geometric multigrid applied
    to a finite-difference scheme. The method computes the magnetic field via a vector potential formulation. The code returns both the vector
    potential in Coulomb gauge and the corresponding magnetic field.

    The code uses a second-order finite-difference scheme for the discretization. In principle, this
    means that the numerical truncation error should decrease as the square of the mesh spacing.

    The backbone of the code is a set of modules for solving Poisson’s equation in N dimensions
    using geometric multigrid. This multigrid solver was written first, and then the vector potential
    code was added later. The name of the code, NDSM, is derived from the original set of modules, i.e.
    N-Dimensional Solver Multigrid (NDSM). The vector-potential module, however, is specifically
    designed for 3D, but leverages the more general N-dimensional backend.

    Code is written in Fortran 2003 and tested using the gfortran 8.3.0 compiler. It has only been
    tested on a Linux platform.

    An earlier version of this code was used and is described in the paper Yang K.E., Wheatland M.S., and Gilchrist S.A.: 2020, ApJ, 984, 151.
    Paper DOI: 10.3847/1538-4357/ab8810

    Mesh and dimensions

    The mesh is rectilinear, i.e. it is described by three mesh vectors x,y,z. These are assumed
    to have fixed spacing. The code makes no explicit assumptions about the units of either B or A, although the
    length scales are non-dimensional.

    Vector Potential Gauge

    The vector potential is computed in the Coulomb gauge. However, note that in a box,
    the Coulomb gauge is not necessarily unique when the boundary conditions are on the normal component
    of the magnetic field. The NDSM code makes a particular choice in resolving this ambiguity. See
    the paper and the notes for more details.

    Convergence

    The multigrid method arrives at a solution via iteration.
    When convergence is poor, the output of the code may not accurately represent the
    solution to the underlying boundary-value problem (see ndsm_notes.pdf for details of the BVP).
    This has several consequences. Firstly, the magnetic field may not be a potential field, and significant electric
    currents may exist within the volume. Secondly, the normal component of the magnetic
    field may not match the normal component specified as boundary conditions.

    Metric

    Two metrics are available for measuring the convergence of the solution:
    the max or mean difference between iterations. By default, the max is used.

    The max is sensitive to failure of convergence at any point, and therefore may be inappropriate
    for many practical problems, but is useful for testing. The mean is a
    more robust convergence metric and may be more appropriate for practical problems.

    Setting mean=True will use the mean rather than the max.

    Tolerances

    The code has two tolerance parameters that determine when to stop
    iterating.

    vc_tol

    The V cycle iteration stops when the max/mean difference is less than
    vc_tol. If vc_tol is large, the code may return quickly, but the solution
    may not be an accurate solution of the BVP.

    ex_tol

    The multigrid method requires the solution of a BVP on the coarsest mesh.
    This is solved via relaxation. The relaxation stops when the change in
    solution is less than ex_tol. Setting this to a large value will
    result in inefficient V cycles, because the BVP is not being accurately
    solved at each V cycle iteration.

    Choice of vc_tol and ex_tol

    When testing the code on analytic solutions, both vc_tol and ex_tol
    can be set to very small values. The default values defined in ndsm.py
    reflect values used for testing.

    For some practical problems, the change in solution between iterations
    may never reach the desired value of vc_tol: the solution is not improving
    with additional V cycles. In this case, the iteration
    will run until ncycles_max is reached. This may take a long time
    depending on how ncycles_max is chosen. A warning will be printed
    if the code returns without achieving vc_tol.

    Setting a large value for vc_tol (and ex_tol) may prevent the
    code from running to ncycles_max, but a large value of vc_tol
    in particular will mean the solution is poorly converged: the numerical
    solution is not an accurate solution of the underlying boundary-value problem.

    Compile shared library

    The core Fortran code builds a shared library.

    Running make will build the shared library, called ndsm.so by default.

    OpenMP

    The code is parallelized using the OpenMP standard. However, it should compile and run without OpenMP;
    it will just be very slow on a multicore machine.

    REAL and INTEGER Types

    The core Fortran modules are written with a real type defined in NDSM_ROOT as REAL(FP). By default
    this is set to C_DOUBLE. This can be changed to any supported Fortran real type without
    breaking anything in the Fortran modules; however, the Python interface only works with a real type
    that is interoperable with C_DOUBLE.

    Similarly, the basic integer type used throughout the code is INTEGER(IT), with IT = C_INT64_T.
    This again can be changed without producing compiler errors. However, making the integer size too small
    may lead to overflow if large meshes are used, since the total number of mesh points is stored as a signed
    Fortran integer. In addition, changing IT will break the Python wrapper.

    Python

    The shared library can be called via the ndsm.py module. The module calls the subroutines
    in the shared library using the Python ctypes module. The shared library needs to be compiled
    first and either exist in sys.path, or else the explicit path to the shared library needs to
    be passed as an argument to the function (see the docstring).

    The basic Python module only requires numpy and ctypes. Some of the tests require more
    modules, e.g. matplotlib.
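
    As a rough illustration only (the wrapper's real entry point, argument
    names, and defaults are documented in the ndsm.py docstring; the function
    name solve and every keyword argument below are assumptions), calling the
    Python wrapper might look something like this:

    import numpy as np
    import ndsm  # wrapper around the compiled ndsm.so shared library

    # Mesh vectors with fixed spacing (non-dimensional lengths).
    x = np.linspace(0.0, 1.0, 64)
    y = np.linspace(0.0, 1.0, 64)
    z = np.linspace(0.0, 1.0, 64)

    # Normal component of B on the six boundary faces (placeholder arrays).
    bc = {
        "x0": np.zeros((64, 64)), "x1": np.zeros((64, 64)),
        "y0": np.zeros((64, 64)), "y1": np.zeros((64, 64)),
        "z0": np.ones((64, 64)),  "z1": np.ones((64, 64)),
    }

    # Hypothetical call: returns the Coulomb-gauge vector potential A and the
    # magnetic field B; mean=True selects the mean convergence metric.
    a, b = ndsm.solve(x, y, z, bc, mean=True, vc_tol=1e-10, ex_tol=1e-12)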

    Tests

    The repository contains code for running a number of integration and unit tests. Some are
    written in Fortran, while others are written in Python. The main integration test is designed
    to demonstrate that the truncation error has the correct scaling with mesh spacing. This is
    a basic test of correctness for the method.

    The truncation error is estimated by applying the code to a known analytic test case and computing metrics
    for the difference between the numerical and analytic solutions. The error metrics used are the max.
    and mean magnitude of the difference between the numerical and analytic vector fields. For a correctly
    implemented second-order scheme, (generally) both these metrics should decrease with the square of the mesh spacing
    (for a uniform mesh). The max. error in particular may not achieve second order scaling for certain problems.
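
    As a sketch of such a check (this is not the repository's test code), the
    two error metrics can be computed from the numerical and analytic fields as
    follows; for a second-order scheme both values should drop by roughly a
    factor of four when a uniform mesh spacing is halved:

    import numpy as np

    def error_metrics(b_num, b_ana):
        """Max and mean magnitude of the pointwise difference between two
        vector fields, both assumed to have shape (nx, ny, nz, 3)."""
        diff = np.linalg.norm(b_num - b_ana, axis=-1)
        return diff.max(), diff.mean()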

    A more complete description of the testing and results is included in the notes.

    Visit original content creator repository
    https://github.com/sag2021/ndsm

  • terraform-aws-artifactory-oss

    terraform-aws-artifactory-oss


    Terraform module –


    It’s 100% Open Source and licensed under the APACHE2.

    Usage

    This is just a very basic example using Bitnami's AMI.


    Copy the example or just include module.art.tf from this repository as a module in your existing Terraform code:

    module "art" {
      source             = "JamesWoolfenden/artifactory-oss/aws"
      version            = "0.1.0"
      common_tags        = var.common_tags
      instance_type      = var.instance_type
      key_name           = var.key_name
      vpc_id             = var.vpc_id
      ssl_certificate_id = var.ssl_certificate_id
      sec_group_name     = var.sec_group_name
      allowed_cidr       = var.allowed_cidr
      subnet_id          = var.subnet_id
      ssh_cidr           = var.ssh_cidr
      record             = var.record
      zone_id            = var.zone_id
    }
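
    The module's inputs (listed under Inputs below) can be supplied through a
    terraform.tfvars file. The values here are placeholders only, not working
    defaults:

    common_tags = {
      owner       = "platform-team"
      environment = "dev"
    }
    instance_type      = "t2.small"
    key_name           = "artifactory"
    vpc_id             = "vpc-0123456789abcdef0"
    subnet_id          = "subnet-0123456789abcdef0"
    ssl_certificate_id = "arn:aws:acm:eu-west-1:123456789012:certificate/example"
    sec_group_name     = "artifactory-sg"
    allowed_cidr       = ["10.0.0.0/16"]
    ssh_cidr           = ["10.0.0.0/24"]
    record             = "artifactory.example.com"
    zone_id            = "Z0123456789EXAMPLE"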

    Costs

    Monthly cost estimate
    
    Project: .
    
     Name                                                 Monthly Qty  Unit         Monthly Cost
    
     module.art.aws_elb.service_elb
     ├─ Classic load balancer                                     730  hours              $21.46
     └─ Data processed                                    Cost depends on usage: $0.0084 per GB
    
     module.art.aws_instance.art
     ├─ Instance usage (Linux/UNIX, on-demand, t2.small)          730  hours              $18.98
     ├─ EC2 detailed monitoring                                     7  metrics             $2.10
     └─ root_block_device
        └─ Storage (general purpose SSD, gp2)                     100  GB-months          $11.60
    
     PROJECT TOTAL                                                                        $54.14
    

    Requirements

    No requirements.

    Providers

    Name Version
    aws n/a
    local n/a
    tls n/a

    Modules

    No modules.

    Resources

    Name Type
    aws_elb.service_elb resource
    aws_instance.art resource
    aws_key_pair.art resource
    aws_route53_record.www resource
    aws_security_group.art resource
    aws_security_group.elb resource
    local_file.private_ssh resource
    local_file.public_ssh resource
    tls_private_key.ssh resource
    aws_ami.art data source
    aws_ebs_default_kms_key.current data source

    Inputs

    Name                Description                                                     Type       Default     Required
    allowed_cidr        n/a                                                             list(any)  n/a         yes
    common_tags         Implements the common_tags scheme                               map(any)   n/a         yes
    instance_type       Instance type for your Artifactory instance                     string     "t2.small"  no
    key_name            n/a                                                             string     n/a         yes
    record              The DNS name for Route53                                        string     n/a         yes
    sec_group_name      n/a                                                             string     n/a         yes
    ssh_cidr            n/a                                                             list(any)  n/a         yes
    ssl_certificate_id  Your SSL certificate ID from ACM to add to your load balancer   string     n/a         yes
    subnet_id           Your Subnets…                                                   string     n/a         yes
    vpc_id              n/a                                                             string     n/a         yes
    zone_id             The zone to use for your DNS record                             string     n/a         yes

    Outputs

    Name Description
    elb n/a
    instance n/a
    record n/a

    Policy

    The Terraform resource required is:

    resource "aws_iam_policy" "terraform_pike" {
      name_prefix = "terraform_pike"
      path        = "https://github.com/"
      description = "Pike Autogenerated policy from IAC"
    
      policy = jsonencode({
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "VisualEditor0",
                "Effect": "Allow",
                "Action": [
                    "ec2:AuthorizeSecurityGroupEgress",
                    "ec2:AuthorizeSecurityGroupIngress",
                    "ec2:CreateKeyPair",
                    "ec2:CreateSecurityGroup",
                    "ec2:CreateTags",
                    "ec2:DeleteKeyPair",
                    "ec2:DeleteSecurityGroup",
                    "ec2:DeleteTags",
                    "ec2:DescribeAccountAttributes",
                    "ec2:DescribeImages",
                    "ec2:DescribeInstanceAttribute",
                    "ec2:DescribeInstanceCreditSpecifications",
                    "ec2:DescribeInstanceTypes",
                    "ec2:DescribeInstances",
                    "ec2:DescribeKeyPairs",
                    "ec2:DescribeNetworkInterfaces",
                    "ec2:DescribeSecurityGroups",
                    "ec2:DescribeTags",
                    "ec2:DescribeVolumes",
                    "ec2:GetEbsDefaultKmsKeyId",
                    "ec2:ImportKeyPair",
                    "ec2:ModifyInstanceAttribute",
                    "ec2:MonitorInstances",
                    "ec2:RevokeSecurityGroupEgress",
                    "ec2:RevokeSecurityGroupIngress",
                    "ec2:RunInstances",
                    "ec2:StartInstances",
                    "ec2:StopInstances",
                    "ec2:TerminateInstances",
                    "ec2:UnmonitorInstances"
                ],
                "Resource": "*"
            },
            {
                "Sid": "VisualEditor1",
                "Effect": "Allow",
                "Action": [
                    "elasticloadbalancing:AddTags",
                    "elasticloadbalancing:AttachLoadBalancerToSubnets",
                    "elasticloadbalancing:CreateLoadBalancer",
                    "elasticloadbalancing:CreateLoadBalancerListeners",
                    "elasticloadbalancing:DeleteLoadBalancer",
                    "elasticloadbalancing:DescribeLoadBalancerAttributes",
                    "elasticloadbalancing:DescribeLoadBalancers",
                    "elasticloadbalancing:DescribeTags",
                    "elasticloadbalancing:ModifyLoadBalancerAttributes",
                    "elasticloadbalancing:RemoveTags"
                ],
                "Resource": "*"
            },
            {
                "Sid": "VisualEditor2",
                "Effect": "Allow",
                "Action": [
                    "route53:ChangeResourceRecordSets",
                    "route53:GetChange",
                    "route53:GetHostedZone",
                    "route53:ListResourceRecordSets"
                ],
                "Resource": "*"
            }
        ]
    })
    }
    

    Related Projects

    Check out these related projects.

    Help

    Got a question?

    File a GitHub issue.

    Contributing

    Bug Reports & Feature Requests

    Please use the issue tracker to report any bugs or file feature requests.

    Copyrights

    Copyright © 2019-2022 James Woolfenden

    License


    See LICENSE for full details.

    Licensed to the Apache Software Foundation (ASF) under one or more contributor license agreements. See the NOTICE file distributed with this work for additional information regarding copyright ownership. The ASF licenses this file to you under the Apache License, Version 2.0 (the “License”); you may not use this file except in compliance with the License. You may obtain a copy of the License at

    https://www.apache.org/licenses/LICENSE-2.0

    Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an “AS IS” BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

    Contributors

    James Woolfenden

    Visit original content creator repository https://github.com/JamesWoolfenden/terraform-aws-artifactory-oss
  • lua-icu-date-ffi

    lua-icu-date-ffi

    LuaJIT FFI bindings to ICU (International Components for Unicode). ICU provides a robust date and time library that correctly and efficiently handles complexities of dealing with dates and times:

    • Date and time formatting
    • Date and time parsing
    • Date and time arithmetic (adding and subtracting)
    • Time zones
    • Daylight saving time
    • Leap years
    • ISO 8601 formatting and parsing

    Usage

    local icu_date = require "icu-date-ffi"
    
    -- Create a new date object.
    local date = icu_date:new()
    
    -- You can get and set the date's timestamp.
    date:get_millis() -- Defaults to current time.
    date:set_millis(1507836727123)
    
    -- You can generate an ISO 8601 formatted string.
    local format_iso8601 = icu_date.formats.iso8601()
    date:format(format_iso8601) -- "2017-10-12T19:32:07.123Z"
    
    -- You can generate a custom formatted string.
    local format_custom = icu_date.formats.pattern("EEE, MMM d, yyyy h:mma zzz")
    date:format(format_custom) -- "Thu, Oct 12, 2017 7:32PM GMT"
    
    -- You can parse a string using various formats.
    local format_date = icu_date.formats.pattern("yyyy-MM-dd")
    date:parse(format_date, "2016-09-18")
    date:format(format_iso8601) -- "2016-09-18T00:00:00.000Z"
    
    -- You can extract specific date or time fields.
    date:get(icu_date.fields.YEAR) -- 2016
    date:get(icu_date.fields.WEEK_OF_YEAR) -- 39
    
    -- You can set specific date or time fields.
    date:set(icu_date.fields.YEAR, 2019)
    date:format(format_iso8601) -- "2019-09-18T00:00:00.000Z"
    
    -- You can perform date or time arithmetic.
    date:add(icu_date.fields.MONTH, 4)
    date:format(format_iso8601) -- "2020-01-18T00:00:00.000Z"
    date:add(icu_date.fields.HOUR_OF_DAY, -2)
    date:format(format_iso8601) -- "2020-01-17T22:00:00.000Z"
    
    -- Timezones are fully supported.
    date:get_time_zone_id() -- "UTC"
    date:set_time_zone_id("America/Denver")
    date:format(format_iso8601) -- "2020-01-17T15:00:00.000-07:00"
    
    -- Daylight saving time is also fully supported.
    date:set_millis(1509862770000)
    date:format(format_iso8601) -- "2017-11-05T00:19:30.000-06:00"
    date:add(icu_date.fields.HOUR_OF_DAY, 5)
    date:format(format_iso8601) -- "2017-11-05T04:19:30.000-07:00"

    Performance

    API

    new

    syntax: date = icu_date.new(options)

    Create and return a new date object.

    The options table accepts the following fields:

    • zone_id: (default: UTC)
    • locale: (default: en_US)
    • calendar_type: (default: calendar_types.GREGORIAN)
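
    For example (a minimal sketch; new is shown with the dot-call syntax from
    the line above, and the values are just the documented defaults plus a
    different time zone):

    local icu_date = require "icu-date-ffi"

    -- Create a date object pinned to a specific zone, locale and calendar.
    local date = icu_date.new({
      zone_id = "America/Denver",
      locale = "en_US",
      calendar_type = icu_date.calendar_types.GREGORIAN,
    })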

    calendar_types

    syntax: fields = icu_date.calendar_types

    fields

    syntax: fields = icu_date.fields

    attributes

    syntax: fields = icu_date.attributes

    formats.pattern

    syntax: format = icu_date.formats.pattern(pattern)

    formats.iso8601

    syntax: format = icu_date.formats.iso8601()

    A shortcut for icu_date.formats.pattern("yyyy-MM-dd'T'HH:mm:ss.SSSZZZZZ").

    date:get

    syntax: date:get(field)

    date:set

    syntax: date:set(field, value)

    date:add

    syntax: date:add(field, amount)

    date:clear

    syntax: date:clear()

    date:clear_field

    syntax: date:clear_field(field)

    date:get_millis

    syntax: date:get_millis()

    date:set_millis

    syntax: date:set_millis(value)

    date:get_attribute

    syntax: date:get_attribute(attribute)

    date:set_attribute

    syntax: date:set_attribute(attribute, value)

    date:format

    syntax: date:format(format)

    date:parse

    syntax: date:parse(format, text, options)

    The options table accepts the following fields:

    • clear: (default: true)

    Visit original content creator repository
    https://github.com/GUI/lua-icu-date-ffi

  • trading-atm-documentation

    TradingATM

    Introduction

    TradingATM is an innovative social copy trading platform designed to facilitate seamless copy trading across multiple trading platforms, including TradeLocker, MetaTrader 4 (MT4), and MetaTrader 5 (MT5). This Software as a Service (SaaS) platform allows users to register their trading accounts as “masters,” enabling the tracking of their trading activities, which are then displayed in both chart and numerical formats. Other users can view the performance of all master accounts and choose to copy trades from profitable masters by registering their accounts as “copiers.” The platform also provides tools for copiers to monitor their account performance effectively.

    Images

    Images to showcase the site.

    Workflow Video

    To understand how TradingATM works, watch our workflow video:

    Technical Implementation

    Homepage

    • Built on WordPress, providing a user-friendly interface for information dissemination and user engagement.

    Dashboard Frontend

    • Developed using React, ensuring a dynamic and responsive user experience for both master and copier accounts.

    Main Backend

    • Leveraging Node.js for efficient server-side operations, handling user registrations, transactions, and data management.

    MetaTrader API Backend

    • Implemented using ASP.NET, facilitating robust integration with MT4 and MT5 for real-time trade execution and account management.

    Payment Integration

    To enhance user accessibility, TradingATM incorporates CryptoChill, a third-party cryptocurrency payment platform. This integration allows users to make payments using various cryptocurrencies, aligning with the growing trend of digital asset utilization in financial transactions.

    Role and Responsibilities

    In this project, my primary responsibilities include backend development, where I focus on creating and maintaining the server-side functionalities that support the core operations of the platform. Additionally, I actively participate in the frontend dashboard development to ensure a cohesive user experience across the application.

    Conclusion

    TradingATM represents a significant advancement in the realm of copy trading by providing a comprehensive platform that bridges various trading environments. With its focus on user engagement, real-time performance tracking, and cryptocurrency payment options, it positions itself as a leader in the evolving landscape of social trading solutions.

    Live Version Link

    https://tradingatmstg.wpenginepowered.com/

    Code Privacy

    The code repository is not publicly accessible as the project is currently live and maintained privately.

    Visit original content creator repository https://github.com/monsterdev95/trading-atm-documentation
  • Reactor-utils

    Reactor-utils

    Extra reactor utils that aren’t included in either core or addons

    ReactorUtils#intersect

    Intersects multiple publishers, emitting every distinct element that appears in more than one of them.

    import com.jidda.reactorUtils.ReactorUtils;
    import reactor.core.publisher.Flux;
    import java.util.Arrays;

        Flux<String> f1 = Flux.just("A","B","C");
        Flux<String> f2 = Flux.just("D","C","A");
        Flux<String> f3 = Flux.just("F","B","D");
        ReactorUtils.intersect(f1,f2).subscribe(); // Emits C,A

        // Can also be used with a prefetch value; the default is unbounded
        ReactorUtils.intersect(f1,f2,32).subscribe(); // Emits C,A

        // Can also be used with a list of publishers
        ReactorUtils.intersect(Arrays.asList(f1,f2,f3)).subscribe(); // Emits C,A,B,D

    ReactorUtils#joinIf

    Joins two publishers values, emits based upon filter condition.

    import com.jidda.reactorUtils.ReactorUtils;
    import reactor.core.publisher.Flux;

        Flux<String> f1 = Flux.just("A","B","C");
        Flux<Integer> f2 = Flux.just(1,5,2);
        final String alphabet = "ABC";

        ReactorUtils.joinIf(f1,
                f2,
                (a,b) -> a,
                (a,b) -> b.equals(alphabet.indexOf(a)+1)
        ).subscribe(); // Emits A,B

    Important:
    Unlike Flux#join, the leftEnd and rightEnd functions have not yet been implemented, so the two joined fluxes must terminate on their own


    Contributing

    Pull requests are welcome. For major changes, please open an issue first to discuss what you would like to change.

    Please make sure to update tests as appropriate.

    Visit original content creator repository
    https://github.com/Jiddak/Reactor-utils

  • AvalonMudClient

    Avalon Mud Client

    A Windows MUD (multi-user dimension) client that allows you to connect to and play any number of text-based online multi-user games.

    Install via the Windows Store

    Screenshots


    Documentation

    Info

    • Language: C# / WPF for .Net 8
    • OS Support: Windows 11, 10 (1607+), 8.1, 7
    • OS Support for Windows Store Version: Windows 11, Windows 10, version 1809 (10.0; Build 17763)

    Key Features

    • Aliases
    • Triggers (simple and regular expression)
    • Macros
    • Package Manager for installing packages that are built for a specific game.
    • LUA (LUA can be inlined as the output of an alias or a trigger)
    • Colored syntax editor for LUA with intellisense (code completion) for all of the extended APIs.
    • 4K monitor support / responsive UI design.
    • Touch screen friendly.
    • Built-in SQLite database with a color-coded, auto-completing query editor.
    • Profiles can be used for multiple characters (any trigger or alias can be set to only run for certain characters).
    • Directions support
    • Global variable support in and outside of LUA that persists beyond mud sessions (Avalon also has temp variable support).
    • Plugin ability (extend Avalon by writing your own C# or Lua plugins)
    • Custom scraping that can be easily turned on and off via hash commands/LUA and then piped to variables (imagine an alias that scraped notes into a database for posterity, etc.).
    • Tick support.
    • Gagging, redirection and custom information panels.
    • Line rewriting (being able to transform the text sent from the server before it’s rendered to the mud client)
    • Regular Expression tester.

    Open Source Libraries used in Avalon

    Short Term Road-map

    • Documentation
    • Ensure touch screen scrolling is smooth on all terminals and controls.

    Recognition / Thank you to testers

    Thank You

    License

    The Avalon Mud Client is being released under a modified MIT license with an additional clause requiring credit to the original author (Blake Pell). E.g. this means the license should be flexible enough to do what you need to do with it.

    Visit original content creator repository https://github.com/blakepell/AvalonMudClient
  • gpg-remote

    GPG REMOTE
    ==========
    
    
    Motivation
    ----------
    
    Using GnuPG in a networked environment always poses certain risk that a
    remote attacker who is able to compromise one of the client applications
    (e.g. MUA, IM client, etc.) could easily leak the private key by calling
    ``gpg --export-secret-keys``. A common mitigation of such risk is
    smartcards; however, they are specialized hardware which a) may not be
    readily available, or b) may not be trusted for various reasons.
    
    
    Overview
    --------
    
    GPG Remote is a client-server application allowing to delegate GnuPG
    private key operations to a remote server running in a trusted environment.
    Server filters client input according to specified rules, and runs GnuPG
    operations on behalf of a client.
    
    GPG Remote separates GnuPG execution between a front-end client and a
    back-end server. The client tries to replicate GnuPG command line
    interface, taking up command line arguments and STDIN data. Internally,
    it then parses args input, figures out files which the user may want to
    process, packs all that into a request package, and sends it to the server.
    
    The server operating in a trusted environment is tasked to execute ``gpg``
    in a safe manner. For this end it uses a whitelist of ``gpg`` command line
    options to filter out anything inappropriate from the received client
    command line arguments (especially, commands like ``--export-secret-keys``).
    Files received from the client are saved into temporary location, and their
    paths in command line arguments are updated accordingly. Finally, ``gpg``
    is called, and its output (comprised of STDERR, STDOUT, exit code, as well
    as newly generated files) is sent back to the client.
    
    
    Installation
    ------------
    
    Make sure you have Python 3.3.x or later installed on all systems you plan
    to use for client and server operation. Both client and server modules are
    self-contained, and can be placed anywhere on the system.
    
    Running GPG Remote Client as a drop-in replacement for system-wide ``gpg``
    requires ``gpgremote_client.py`` script to be moved to or symlinked from
    ``/usr/bin/gpg`` path. If both components are running on the same system,
    ensure only the server user has read-write access to GnuPG keyring files.
    
    In order to enable passphrase input over a network connection, follow these
    steps:
    
    1. Make sure standard ``gpg`` ``pinentry`` application is installed on the
       client.
    2. Install [``pyassuan``](https://pypi.python.org/pypi/pyassuan/) library
       on both client and server systems.
    3. Ensure ``gpg-agent`` is properly configured and running on the server,
       and path to bundled GPG Remote ``pinentry.py`` is passed to ``gpg-agent``
       using ``--pinentry-program`` option (see ``man gpg-agent`` for details).
    
    If "panic" rules support is required (see the corresponding section below),
    install [``pbkdf2``](https://pypi.python.org/pypi/pbkdf2) Python module on
    the server system.
    
    
    Configuration
    -------------
    
    The client reads configuration data (specifically, server listening
    host:port) from ``gpgremote_client.conf`` file located in ``~/.gnupg``
    directory unless path is overridden with ``GNUPGHOME`` environment variable.
    
    By default server reads its configuration from ``gpgremote_server.conf``
    file located in ``~/.gnupg`` (the path can be overridden with ``GNUPGHOME``
    environment variable). However, specific path can be provided with
    ``-c``/``--config`` option to server invocation. Most server parameters
    can be reconfigured from the command line as well (``-h``/``--help`` will
    print all available options).
    
    
    Whitelist
    ---------
    
    The second part of server configuration is ``gpg`` options whitelist
    defined in ``whitelist.conf`` in the same directory as server config file.
    The syntax is simple, yet configuring the whitelist correctly is critical
    to server security (see _Security considerations_ section for details).
    
    1. Lines not starting with a dash sign are ignored.
    2. A single set of options per line.
    3. A set is either a single option (in long or short form), or a
       space-separated long and short forms (in arbitrary amount and order).
    4. Non dash-prefixed words in a set (if any) have special meaning:
    
        * A bracketed word is considered a wildcard parameter turning options
          in a set into parameterized ones.
        * An unbracketed word is a whitelisted parameter value, and, as such,
          options in a set can be passed with this value only. Multiple
          whitelisted values must be provided on the same line (quoting /
          space-escaping is supported).
        * If a word is bracketed ``#NO_FILES``, it means no files should be
          expected in arguments list for this options set (see _Security
          considerations_ section below).
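
    As an illustration only (this is not the whitelist shipped with GPG
    Remote), a fragment following these rules might look like:

        Any line not starting with a dash is ignored, so this is a comment.
        --sign -s
        --detach-sign -b
        --encrypt -e
        --recipient -r [keyid]
        --armor -a
        --output -o [file]
        --list-keys -k [#NO_FILES]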
    
    
    One-time passwords
    ------------------
    
    An extra security measure for private key protection is _one-time passwords
    (OTP)_. When enabled, it will require user to enter a short random string
    (from a pre-generated list) along with private key passphrase. This will
    thwart adversary's attempts to use private key over GPG Remote by passing a
    sniffed passphrase. (Please note: bundled GPG Remote ``pinentry.py`` must
    be used, see _Installation_ section above for detailed requirements.)
    
    To use this feature, enable it in server configuration file or with
    ``--otp`` startup option. Then run the server with ``--gen-otp`` option and
    enter the number of one-time passwords to generate. The longer the list,
    the later it will has to be replenished, but the wider will be the window
    of opportunity for the attacker if OTP list gets compromised. Note: the
    list can be regenerated at any moment, and any passwords left in it will be
    invalidated; regenerating OTP list does not requires server restart.
    
    Once OTP is enabled, a next password from the list will be required each
    time a private key passphrase is prompted - an OTP must be appended to
    the end of passphrase (without spaces or other delimiters). An entered OTP
    is invalidated, i.e. if the passphrase is mistyped, the next OTP will be
    requested on each retry. Once OTP list is depleted, any private key
    operation will fail until the new list is generated.
    
    Please note that ``gpg-agent`` passphrase caching bypasses OTP: while the
    passphrase is cached, the key could be used without user interaction.
    
    
    "Panic" rules
    -------------
    
    It is possible to configure an arbitrary amount of so called "panic" rules.
    These rules can be used to execute specific shell commands on the server in
    the event a predefined passphrase is entered in ``pinentry`` dialog.
    (Please note: bundled GPG Remote ``pinentry.py`` must be used, see
    _Installation_ section above for detailed requirements.)
    
    Each rule is specified as an entry in the server configuration file. Entry
    name must begin with ``panic_`` prefix followed by a unique name. Entry
    value consists of a space-separated security token and shell command(s) in
    regular notation (i.e. no quoting or escaping is necessary), or a special
    command (see below). Security token is a PBKDF2 hash string generated from
    a passphrase that should trigger a specific rule. Running server with
    ``--gen-token`` option will help to generate a token for a particular
    passphrase.
    
    A single passphrase can trigger any amount of rules if all of them use the
    same passphrase protection (but not necessarily the same token literatim).
    A triggered command is silently executed by the server-side ``pinentry``
    process with access permissions of ``gpg-agent`` parent user prior to
    resending the entered passphrase to ``gpg-agent``. Matched rules are
    executed in the order they are defined in the configuration file.
    
    The following environment variables are passed to "panic" shell commands:
    
    * ``GPG_REMOTE_PID``: PID of the GPG Remote server process.
    * ``GPG_REMOTE_KEYRINGS``: Space-separated list of paths to non-empty
      ``gpg`` keyring files.
    
    The following special commands may be used instead of shell commands in
    "panic" rule definitions. Please note that a single special command only
    can be specified for any rule:
    
    * ``STOP``: Stop GPG Remote server gracefully. Server will send the client
      a general error message, finish processing of any concurrent requests,
      clean up all the data received from the client, and exit.
    * ``KILL``: Terminate GPG Remote server immediately. Server will send
      ``SIGKILL`` signal to itself without performing any cleanup procedures.
    
    Please take into account that ``gpg-agent`` reads the private key in memory
    _before_ spawning ``pinentry``, and simply running ``rm``/``wipe`` to
    delete private keyring files will not destroy the key immediately - it is
    necessary to terminate the running GPG Remote server process (using ``STOP``
    or ``KILL`` special commands) to prevent sending ``gpg`` operation results
    back to the client. Use rules chaining (by assigning the same security
    token / passphrase to multiple rules) to run multiple commands when needed.
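
    For illustration only (the ``name = value`` form and the token placeholder
    below are assumptions, not the literal contents of a real config file), a
    pair of chained rules wiping the keyring files and then terminating the
    server could look like:

        panic_wipe = <security_token> wipe -f $GPG_REMOTE_KEYRINGS
        panic_kill = <security_token> KILL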
    
    
    Security considerations
    -----------------------
    
    Communication channel authentication/encryption is out of the scope of this
    application. The user may employ SSH or VPN tunnelling to create a trusted
    channel for client-server communication.
    
    The threat model and main attack scenario is a client-side remote attacker
    (e.g. compromised network application) exfiltrating ``gpg`` private keys.
    The server mitigates this risk by using ``gpg`` command line options
    whitelist.
    
    Note that even if keyring modifying options (e.g. ``--delete-key``,
    ``--import``) are not whitelisted, client user would still be able to add
    keys to the keyring by simply sending them to STDIN (``gpg`` processes it
    contextually). If this should be avoided, it's up to the server
    administrator to run the server as a user without write access to ``gpg``
    keyring files. Remember that default ``gpg`` keyrings can be overridden
    with ``--no-default-keyring``, ``--secret-keyring`` and ``--keyring``
    options.
    
    Another potential risk to the server is its local files exfiltration. In
    the naive case the user could ask the server to run ``gpg -o - --enarmor
    [path_to_local_file]``, and the server would happily send that file contents
    in STDOUT. In order to protect against such attacks the server makes sure
    the number of filename arguments is equal to the number of files received
    in client request package. (These complications are necessary as simply
    refusing to process requests containing server local filepaths would lead
    to information leakage about server filesystem contents.) However, it
    requires correct configuration of the server whitelist in respect to
    options parameter specification: in case an option accepts parameters,
    its set MUST include parameter wildcard/value, otherwise the server might
    become vulnerable to the described attack.
    
    Note also that a number of ``gpg`` command line options (namely,
    ``--list-keys``, ``--list-sigs``, etc.) receive arbitrary amount of
    non-file arguments. This case is supported with special ``[#NO_FILES]``
    placeholder. If such an option is provided by the client, the server strips
    out any ``-o``/``--output`` options, and prevents sending any files back to
    the client.
    
    Files received from the client (which may contain sensitive cleartext data)
    are written by the server to a temporary location. By default it is a
    system-wide temp directory (commonly, ``/tmp``), but in case this directory
    is unsafe, it can be overridden using ``TEMP`` environment variable, or
    ``--temp`` command line option for server invocation. (Note that files
    aren't written directly to tempdir, but to temporary subdirectories with
    0700 access mode, i.e. accessible only by the GPG Remote server user).
    
    As neither client nor server employ any semantic analysis of command line
    arguments (i.e. does not understand the meaning of options and commands),
    the client assumes an option parameter or trailing argument named as an
    existing client local file to be a file intended for ``gpg`` processing,
    and optimistically sends it to the server. Note that client unconditionally
    writes out all files received from the server (on the assumption it has
    write access to a given path) without asking for overwrite if the same file
    exist.
    
    The client may try to cause DoS on the server by sending it excessively
    huge input(s). This scenario is addressed with server resources management
    parameters: ``size_limit``, ``threads`` and ``queue``. The first limits the
    size of client request package and, as a result, memory usage. The second
    limits the number of CPU threads used for requests processing (each request
    is single-threaded). Note that the total amount of RAM the server might use
    is around ``S * T * 2``, where ``S`` is the package size limit and ``T`` is
    the threads count (i.e. with default ``S=1GB`` and ``T=2`` maximum RAM
    usage would be 4 GB). Finally, the ``queue`` value is the amount of
    requests that can be queued for processing. If the value is higher than
    threads count then the remaining requests will wait until active ones are
    finished. Awaiting requests does not take up additional resources except
    for a socket connection.
    
    When remote passphrase input is used, an entered passphrase never touches
    long-lived server process memory. However it remains in the client memory
    for the whole duration of ``gpg`` execution, and during that period it's
    subject to a risk of memory swapping. Make sure client swapping device is
    encrypted, disabled, or other protective measures are employed.
    
    One-time passwords (OTP) is a mere access control mechanism enforced by GPG
    Remote. As such, it does not affect any cryptographic material, and must
    not be expected to deliver specific cryptographic properties, e.g. PFS.
    
    If "panic" rules are configured on the server with high hashing iterations
    count, an adversary can potentially deduce this fact from a delay of ``gpg``
    output as user passphrase must be matched to each unique security token.
    It is also possible to detect "panic" rules execution if the executed
    command takes a long time to complete.
    
    
    Technical details
    -----------------
    
    Communication protocol is a simple two-step request-response. Package
    format for both directions is as follows:
    
    ``<len_p> | <len_j> | JSON(<header>, <type>, <fields>, <files_meta>) |
    [binary]``
    
    * ``len_p`` (8 bytes): Overall package length.
    * ``len_j`` (8 bytes): JSON packet length.
    * ``header`` (list): ``auth`` token (optional) and application ``version``.
    * ``type`` (str): Package identifier.
    * ``fields`` (list): Arbitrary data fields.
    * ``files_meta`` (dict): ``File_pathname->file_len`` mapping.
    * ``binary`` (bytes): Concatenated files data (optional).
    
    If authentication token is provided, it is expected to be a HMAC-SHA256 hex
    digest of all JSON-packed metadata, and is calculated as follows: the
    metadata elements (except for ``auth``) are packed as a flat list in the
    above mentioned order into a JSON-encoded string, which is passed to HMAC
    context. Authentication is currently used for server<>``pinentry`` IPC
    only. As such, binary data is not authenticated.
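
    As a rough sketch of this layout (not the actual GPG Remote code: the byte
    order, whether the length prefixes count themselves, and the exact JSON and
    HMAC composition are assumptions), a request package could be assembled
    along these lines:

    ```
    #!python

    import json, hmac, hashlib

    def pack(type_, fields, files, version='1.3', auth_key=None):
        # files: mapping of pathname -> bytes; meta and binary share the order
        files_meta = {path: len(data) for path, data in files.items()}
        binary = b''.join(files.values())
        auth = None
        if auth_key is not None:
            # Flat list of metadata (except 'auth'), JSON-encoded, then HMAC'd
            flat = json.dumps([version, type_, fields, files_meta])
            auth = hmac.new(auth_key, flat.encode(), hashlib.sha256).hexdigest()
        packet = json.dumps([[auth, version], type_, fields, files_meta]).encode()
        len_j = len(packet).to_bytes(8, 'big')
        len_p = (16 + len(packet) + len(binary)).to_bytes(8, 'big')
        return len_p + len_j + packet + binary
    ```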
    
    Remote passphrase input is implemented using custom ``pinentry.py`` shim
    application. It employs the following communication steps (``pinentry``
    actor below is the custom ``pinentry.py`` shim application unless otherwise
    stated):
    
    1. ``server>gpg-agent``: uses ``PINENTRY_USER_DATA`` environment variable
       (which is passed over ``gpg > gpg-agent > pinentry`` execution stack) to
       provide ``pinentry`` with IPC communication details including IPC socket
       and session authentication key.
    2. ``gpg-agent>pinentry``: calls ``pinentry`` with ``PINENTRY_USER_DATA``
       environment variable and initiates Assuan protocol.
    3. ``pinentry>server``: initiates IPC protocol and asks for client network
       connection data.
    4. ``server>pinentry``: sends opened client network socket directly to the
       custom ``pinentry`` over IPC channel (UNIX socket).
    5. ``pinentry>client``: uses the provided network connection to send the
       client all the required ``pinentry`` data (text strings and startup
       parameters) got from ``gpg-agent`` at step 2.
    6. ``client``: runs standard ``pinentry`` to acquire user response data (in
       the form of Assuan protocol response).
    7. ``client>pinentry``: sends user response data.
    8. ``pinentry``: executes "panic" commands if any are triggered by the
       client passphrase.
    9. ``pinentry>gpg-agent``: replays user response in Assuan protocol
       exchange.
    
    If wrong passphrase is entered, steps 2-9 are performed again up to the
    number of retries required by ``gpg-agent``.
    
    It should be noted that although IPC UNIX socket (used for
    server<>``pinentry`` communication) access is not restricted (in order to
    allow running server and ``gpg-agent`` under different users), server
    verifies authenticity of packages received using the session auth key it
    provides ``pinentry`` process with.
    
    The one-time passwords (OTP) list is stored on the server in plaintext.
    Each line of OTP list file is a colon-delimited ID number and the actual
    password string.
    
    "Panic" rules security token is a PBKDF2[SHA-1, HMAC] output with effective
    entropy limit of 256 bits. 64-bit salt value is used. Token format is a
    string of colon-delimited Base62-encoded elements: iterations count
    (in bytes representation), salt, hash.
    
    Default server listening port (29797) was produced in Python as follows:
    
    ```
    #!python
    
    int.from_bytes(b'gpgremote', 'big') % 2 ** 16
    ```
    
    (Although it has been noted that only the ``b'te'`` bytestring has any
    effect in such a procedure.)
    
    
    Issues, limitations
    -------------------
    
    * Interactive console UI operations (e.g. key generation, key edit, etc.)
      are not supported.
    
    * Client does not support reading input from TTY, data must be piped to
      STDIN.
    
    * Passing file descriptors and implementing other forms of advanced IPC
      interaction with ``gpg`` is not supported.
    
    * No environment variables are passed from the client. If ``gpg`` must be
      invoked with specific environment (e.g. ``LANG``), start GPG Remote
      Server with all the necessary variables instead.
    
    * If GnuPG 2.x or higher is used without custom Pinentry, secret key
      operations would spawn standard Pinentry dialog on the server side which
      will prevent ``gpg`` process from terminating. This might be a feature if
      both GPG Remote server and client are running on the same system,
      otherwise it's up to the server administrator to disable ``gpg-agent``
      server-side (for example, by downgrading to GnuPG 1.4.x or starting
      ``gpg-agent`` with ``--batch`` option).
    
    
    ToDo
    ----
    
    * One-time passwords support.
    * Minimize memory footprint.
    
    
    Version history
    ---------------
    
    * 2015-03-18 - ``v1.3``
        - Added support for one-time passwords.
        - Fixed a case with pinentry and stdin pipe.
    
    * 2015-03-16 - ``v1.2``
        - Passphrase confirmation while generating "panic" security token.
        - Minor aesthetic and code documentation cleanups.
        - First stable release.
    
    * 2015-02-17 - ``v1.2b``
        - Updated minimum Python version requirement to 3.3 (it was mistakenly
          lower).
        - Raised default logging verbosity to info level.
        - Matched "panic" rules are executed in the defined order.
        - New "panic" rules security token format replacing ``crypt(3)`` one.
          Output length limit is 256 bits now instead of 192 bits.
        - Optimized security token matching scheme (speed-wise) if the same
          token is used for multiple rules.
        - Changed Server<>Pinentry IPC interface and protocol.
        - Set IPC message size limit to 64 KB (was a possible DoS scenario).
        - Special "panic" commands to properly terminate server.
        - Fixed IPC socket permissions which prevented running server and
          ``gpg-agent`` under different users.
        - Fixed error handling if ``gpg`` executable cannot be found.
        - Code cleanup and reorganization.
    
    * 2015-02-06 - ``v1.1b1``
        - Fixed 'ttyname' Assuan option update on the client side.
        - Honour PINENTRY_USER_DATA="USE_CURSES=1" environment variable.
        - Support for "panic" commands.
    
    * 2015-02-05 - ``v1.0b1``
        - Graceful server shutdown on SIGTERM.
        - Custom Pinentry to support passphrase input over a network.
        - Updated timeout defaults to make them compatible with passphrase
          input.
        - Code cleanup.
    
    * 2015-01-27 - ``v0.9b2``
        - Fixed ``--output -`` case.
        - Versioned protocol.
        - Config parser updates.
        - More unittest coverage.
        - ``README`` file updates.
    
    * 2015-01-23 - ``v0.9b1``
        - First beta release.
    
    
    License
    -------
    
    See ``COPYING``.
    
    
    Author
    ------
    
    Vlad "SATtva" Miller
    
    sattva@vladmiller.info
    
    http://vladmiller.info
    
    ``0x8443620A``
    
    

    Visit original content creator repository
    https://github.com/sattva1/gpg-remote

  • sphinx-action

    Sphinx Build Action


    This is a Github action that looks for Sphinx documentation folders in your project. It builds the documentation using Sphinx and any errors in the build process are bubbled up as Github status checks.

    The main purposes of this action are:

    • Run a CI test to ensure your documentation still builds.

    • Allow contributors to get build errors on simple doc changes inline on Github without having to install Sphinx and build locally.

    Example Screenshot

    How to use

    Create a workflow for the action, for example:

    name: "Pull Request Docs Check"
    on: 
    - pull_request
    
    jobs:
      docs:
        runs-on: ubuntu-latest
        steps:
        - uses: actions/checkout@v1
        - uses: ammaraskar/sphinx-action@master
          with:
            docs-folder: "docs/"
    • You can choose a Sphinx version by using the appropriate tag. For example, to specify Sphinx 7.0.0 you would use ammaraskar/sphinx-action@7.0.0. master currently uses Sphinx 2.4.4.

    • If you have any Python dependencies that your project needs (themes, build tools, etc) then place them in a requirements.txt file inside your docs folder.

    • If you have multiple sphinx documentation folders, please use multiple uses blocks.
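
    For example (the folder names here are placeholders), building two separate
    documentation folders might look like:

        - uses: ammaraskar/sphinx-action@master
          with:
            docs-folder: "docs/"
        - uses: ammaraskar/sphinx-action@master
          with:
            docs-folder: "other_docs/"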

    For a full example repo using this action including advanced usage, take a look at https://github.com/ammaraskar/sphinx-action-test

    Great Actions to Pair With

    Some really good actions that work well with this one are actions/upload-artifact and ad-m/github-push-action.

    You can use these to make built HTML and PDFs available as artifacts:

        - uses: actions/upload-artifact@v1
          with:
            name: DocumentationHTML
            path: docs/_build/html/

    Or to push docs changes automatically to a gh-pages branch:

    Code for your workflow

        - name: Commit documentation changes
          run: |
            git clone https://github.com/your_git/repository.git --branch gh-pages --single-branch gh-pages
            cp -r docs/_build/html/* gh-pages/
            cd gh-pages
            git config --local user.email "action@github.com"
            git config --local user.name "GitHub Action"
            git add .
            git commit -m "Update documentation" -a || true
            # The above command will fail if no changes were present, so we ignore
            # the return code.
        - name: Push changes
          uses: ad-m/github-push-action@master
          with:
            branch: gh-pages
            directory: gh-pages
            github_token: ${{ secrets.GITHUB_TOKEN }}

    For a full fledged example of this in action take a look at: https://github.com/ammaraskar/sphinx-action-test

    Advanced Usage

    If you wish to customize the command used to build the docs (defaults to make html), you can provide a build-command in the with block. For example, to invoke sphinx-build directly you can use:

        - uses: ammaraskar/sphinx-action@master
          with:
            docs-folder: "docs/"
            build-command: "sphinx-build -b html . _build"

    If there are system-level dependencies that need to be installed for your build, you can use the pre-build-command argument like so:

        - uses: ammaraskar/sphinx-action@master
          with:
            docs-folder: "docs2/"
            pre-build-command: "apt-get update -y && apt-get install -y latexmk texlive-latex-recommended texlive-latex-extra texlive-fonts-recommended"
            build-command: "make latexpdf"

    Running the tests

    python -m unittest

    Formatting

    Please use black for formatting:

    black entrypoint.py sphinx_action tests

    Visit original content creator repository https://github.com/ammaraskar/sphinx-action
  • MinecraftPlayerActivityChart

    Minecraft Player Activity Charts

    A simple Python program made to visualize the data collected by the ComputerCraft program SpyBot.lua by putting them in different types of charts.

    The collected data must be placed inside the data directory in order to be visualized.

    Log Extractor

    A secondary program made to extract data from a minecraft server logs and create a new file usable by the main program.

    The logs must be in the .gz format and they must be placed inside the data directory.

    Examples

    Daily active players example chart Play sessions example chart

    See more examples

    Custom Monthly Chart

    A simple Python program originally designed to display custom data from a .csv file in a monthly stacked bar chart.

    It can now be used to display the data with different types of charts (Stacked bar / Line / Bar / Pie), but only the stacked bar chart will display all of the provided data, as the other types will lose either the categories or the dates.

    The .csv file used by default is ./data/data.csv and its content must follow the following format:

    • 1st line: Chart title;Y axis title;X axis title
    • 2nd line: Starting month (MM-YYYY format)
    • Other lines: Category name;Category color (#RRGGBB);Category values per month
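
    For example, a data.csv covering three months starting in January 2024
    could look like this (the numbers are made up):

        Player activity;Hours played;Month
        01-2024
        Alice;#FF0000;12;30;25
        Bob;#00FF00;8;0;14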
    Visit original content creator repository https://github.com/gregoryeple/MinecraftPlayerActivityChart