Author: sjl99ux5ojzh

  • Playlists-Maker

    Playlists Maker

    Download: App Store

    Description

    Quickly tidy your music library with Playlists Maker!
    Select the songs you want to sort and the playlists they can be added to, and you’re ready to go.

    Filter the song selection by date, or pick songs from specific playlists or collections.
    The app shows track info and includes a built-in music player to help you decide which playlist each song belongs in.

    Compatible with Apple Music.

    Preview

    iPad Screenshot

    Licence

    Copyright © 2017 Thomas NAUDET
    
    This program is free software: you can redistribute it and/or modify
    it under the terms of the GNU General Public License as published by
    the Free Software Foundation, either version 3 of the License, or
    (at your option) any later version.
    
    This program is distributed in the hope that it will be useful,
    but WITHOUT ANY WARRANTY; without even the implied warranty of
    MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
    GNU General Public License for more details.
    
    You should have received a copy of the GNU General Public License
    along with this program. If not, see http://www.gnu.org/licenses/
    

    Versions

    v1.0 · 07/09/2017

    Original publication

    Visit original content creator repository https://github.com/Tomn94/Playlists-Maker
  • computer_vision_with_tello_drone

    computer_vision_with_tello_drone

    Five Projects using Tello Drone

    >>> Click Here: YouTube link <<<

    Table of Contents

    1. Installations
    2. Project 1: Face Tracking
    3. Project 2: Body Tracking & Control
    4. Project 3: Hand Gesture & Control
    5. Project 4: Object Detection: Basics
    6. Project 5: Object Detection: YOLO

    Installations

    To install ImageAI and the other requirements, run the installation instructions below from the command line:

    • Download and Install Python 3.10 or higher
    • Clone repository
      • Clone the repository (link: Click here), or run:

        git clone https://github.com/mrsojourn/computer_vision_with_tello_drone.git 
        
      • Install requirements: download the requirements.txt file and install it with:

        pip install -r requirements.txt
        
      • Run individual projects and ENJOY =)

    Project 1

    Tello: Face Tracking

    Face tracking demo video.

    >>> Click Here: YouTube link <<<
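
    As a rough illustration of how such a project can be put together, here is a minimal sketch assuming the djitellopy package and OpenCV's bundled Haar cascade (both are assumptions; the repository's scripts may use different libraries). It reads the Tello video stream, detects a face, and yaws toward it with a simple proportional controller whose gain is arbitrary:

    # Hedged sketch only: assumes djitellopy + OpenCV Haar cascade, not the repo's exact code.
    import cv2
    from djitellopy import Tello

    face_cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
    )

    tello = Tello()
    tello.connect()
    tello.streamon()

    while True:
        frame = tello.get_frame_read().frame
        if frame is None:
            continue
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = face_cascade.detectMultiScale(gray, scaleFactor=1.2, minNeighbors=5)
        for (x, y, w, h) in faces:
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
            # Proportional yaw toward the face center; the 0.3 gain is an arbitrary choice.
            error = (x + w // 2) - frame.shape[1] // 2
            yaw = int(max(-100, min(100, 0.3 * error)))
            tello.send_rc_control(0, 0, 0, yaw)
        cv2.imshow("tello", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break

    tello.streamoff()
    cv2.destroyAllWindows()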

    Project 2

    Tello: Body Tracking & Control

    Body Tracking & Control demo video.

    >>> Click Here: YouTube link <<<

    Project 3

    Tello: Hand Gesture Control

    Hand gesture & control demo video.

    >>> Click Here: YouTube link <<<

    Project 4

    Tello: Object Detection Basics

    Object detection basics demo video.

    >>> Click Here: YouTube link <<<

    Project 5

    Tello: Object Detection: YOLO

    Object detection YOLO demo video.

    >>> Click Here: YouTube link <<<
    Visit original content creator repository https://github.com/mrsojourn/computer_vision_with_tello_drone
  • stat405project

    NYC Crash Data Analysis

    Prerequisites

    To run the code in this repository, you will need to have the following installed:

    • RStudio
    • R
    • pdflatex (for rendering the .pdf files)

    Data Setup

    The link to the SQLite database used can be found here

    To populate your local repository with the database, download the database from the link above and move it to the data folder in the root of the repository. The database should be named nyc_crash_data.db. If the data folder does not exist, create it in the root of the repository.

    The original CSV files used to create the database were the most up-to-date data sets available at the time of the project. The data sets were downloaded from the NYC OpenData website and can be found here, here, and here.

    About the Data

    via NYC OpenData:

    “The Motor Vehicle Collisions crash table contains details on the crash event. Each row represents a crash event. The Motor Vehicle Collisions data tables contain information from all police reported motor vehicle collisions in NYC. The police report (MV104-AN) is required to be filled out for collisions where someone is injured or killed, or where there is at least $1000 worth of damage.”

    The data sets we used are:

    1. Motor Vehicle Collisions - Crashes: This data set contains information about the crashes themselves, such as the date, time, and location of the crash, as well as the number of people injured and killed.

    2. Motor Vehicle Collisions - Persons: This data set contains information about the people involved in the crashes, such as their age, their unique identifier, etc.

    3. Motor Vehicle Collisions - Vehicles: This data set contains information about the vehicles involved in the crashes, such as the vehicle type, the vehicle make, etc.

    For a more detailed breakdown of the data sets used and how they relate, please see the data dictionary located in the data folder. This file details the foreign keys and their corresponding tables, as well as the data types and descriptions of each column in the data set.

    Data Analyses

    To see our incremental data analysis, please see the reports folder. This folder contains both the .qmd files and their corresponding .pdf files of each of our report iterations.

    Live Demo of the Shiny App

    • Check out the demo of our Shiny app here.

    Running the Shiny App Locally (Optional)

    The Shiny app can be run by opening the app.R file in the app/ directory in RStudio and clicking the “Run App” button in the top right corner of the script editor. This will open the app in a new window in your default web browser. (Note: you must first run the code in the report_final.qmd file to populate the database before running the Shiny app.)

    Contributors

    Visit original content creator repository
    https://github.com/micahkepe/stat405project

  • camera

    Camera Util

    Overview

    A collection of python threaded camera support routines for

    • USB and laptop internal webcams
    • RTSP streams
    • MIPI CSI cameras (Raspberry Pi, Jetson Nano)
    • FLIR blackfly (USB)

    Also supports saving as

    • HDF5
    • tiff
    • avi, mkv

    Supported OS

    • Windows
    • MacOS
    • Unix

    The routines primarily use OpenCV or PySpin to interface with the camera.
    The image acquisition runs in a background thread to achieve maximal frame rate and minimal latency.

    This work is based on efforts from Mark Omo and Craig Post.

    Requirements

    • PySpin for FLIR cameras
    • opencv for USB, CSI cameras, RTSP network streams
      • Windows uses cv2.CAP_MSMF
      • Darwin uses cv2.CAP_AVFOUNDATION
      • Linux uses cv2.CAP_V4L2
      • Jetson Nano uses cv2.CAP_GSTREAMER
      • RTSP uses cv2.CAP_GSTREAMER

    On Windows, GStreamer is not enabled by default. If you want RTSP functionality, you need a custom-built OpenCV. See my Windows installation instructions on GitHub.

    Installation

    camera

    1. cd "folder where you have this Readme.md file"
    2. pip install . or python setup.py bdist_wheel and pip3 install .\dist\*.whl

    opencv
    3. pip3 install opencv-contrib-python

    tiff and hdf5

    • https://www.lfd.uci.edu/~gohlke/pythonlibs/#imagecodecs
    • https://www.lfd.uci.edu/~gohlke/pythonlibs/#tifffile
    • https://www.lfd.uci.edu/~gohlke/pythonlibs/#h5py

    Make sure the version matches your python installation (e.g. 3.8) and CPU architecture (e.g. 64).

    blackfly
    Spinnaker provides the SDK and Python bindings; the versions of the two need to match.

    To install the downloaded wheels, in CMD window:

    1. cd Downloads

    2. pip3 install imagecodecs....

    3. pip3 install tifffile....

    4. pip3 install h5py....

    5. pip3 install spinnaker_python...

    6. Make sure you have a C:\temp directory if you use the example storage programs.

    7. To get better tiff performance, installing libtiff is advised: https://github.com/uutzinger/Windows_Install_Scripts/blob/master/Buildinglibtiff.md

    To install OpenCV on Raspi:

    1. cd ~
    2. sudo pip3 install opencv-contrib-python==4.5.3.56 (as time progresses, the version number might need to be increased)
    3. sudo pip3 install tifffile h5py platform imagecodecs

    How to create camera config files

    A. Specifications of your camera

    On Windows, the Camera utility will show you the available resolutions and frame rates.
    To investigate other options you can use OBS Studio (or any other capture program): add a camera capture device and inspect the video options.
    Running py -3 list_cv2CameraProperties.py will show all camera options the video subsystem offers. When an option reports -1, it is likely not available for that camera.
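
    For reference, a minimal sketch of such a probe written against plain OpenCV is shown below; the camera index and the particular properties queried are assumptions, and the repository's list_cv2CameraProperties.py may cover more:

    # Hedged sketch: probe a few common capture properties; -1 (or 0) usually means unsupported.
    import cv2

    PROPS = {
        "WIDTH": cv2.CAP_PROP_FRAME_WIDTH,
        "HEIGHT": cv2.CAP_PROP_FRAME_HEIGHT,
        "FPS": cv2.CAP_PROP_FPS,
        "FOURCC": cv2.CAP_PROP_FOURCC,
        "AUTO_EXPOSURE": cv2.CAP_PROP_AUTO_EXPOSURE,
        "EXPOSURE": cv2.CAP_PROP_EXPOSURE,
        "GAIN": cv2.CAP_PROP_GAIN,
        "WB_TEMPERATURE": cv2.CAP_PROP_WB_TEMPERATURE,
    }

    cap = cv2.VideoCapture(0)  # adjust the camera index or backend for your system
    for name, prop in PROPS.items():
        print(f"{name:16s} {cap.get(prop)}")
    cap.release()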

    B. Configuration file

    Use one of the existing camera configurations in examples/configs or create your own.
    As a first step, set the appropriate resolution and frame rate.
    As a second step, figure out the exposure and auto-exposure settings.
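
    For illustration only, a configuration might look like the sketch below; the key names here are hypothetical, and the actual files in examples/configs may use different fields:

    # Hypothetical config sketch; consult examples/configs for the real field names.
    camera_config = {
        "camera_res": (1280, 720),  # step 1: resolution supported by the camera
        "fps": 30,                  # step 1: frames per second
        "autoexposure": 1,          # step 2: auto-exposure mode (backend-specific)
        "exposure": -6,             # step 2: manual exposure (backend-specific units)
        "fourcc": "MJPG",           # pixel format, if the camera offers a choice
    }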

    Run Example

    Run capture_display.py from .\examples.
    You need to set the proper config file in the program. You should not need to edit the Python files in the capture or streamer folders.

    Example Programs

    Display:
    In general, display should occur in the main program.
    OpenCV requires waitKey to be executed in order to update the display, which limits the update rate to about 50-90 fps.

    Queue:
    Queues handle data transfer between the main program and the capture and storage threads.
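
    A minimal sketch of this capture-thread-plus-queue pattern, written with plain OpenCV rather than this package's classes, looks roughly like:

    # Hedged illustration of the threaded capture pattern; not the package's actual API.
    import queue
    import threading
    import cv2

    frame_queue = queue.Queue(maxsize=32)
    stop_event = threading.Event()

    def capture_loop(device=0):
        cap = cv2.VideoCapture(device)
        while not stop_event.is_set():
            ok, frame = cap.read()
            if ok:
                try:
                    frame_queue.put_nowait(frame)
                except queue.Full:
                    pass  # drop the frame if the consumer falls behind
        cap.release()

    threading.Thread(target=capture_loop, daemon=True).start()

    while True:
        frame = frame_queue.get()               # blocks until a frame arrives
        cv2.imshow("camera", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):   # waitKey also drives the display update
            break

    stop_event.set()
    cv2.destroyAllWindows()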

    Examples:

    • capture_display.py tests camera capture for all capture platforms except blackfly.

    • blackfly_display.py tests the blackfly capture module, displays images and reports framerate.

    • capture_saveavi_display.py display and save to avi files

    • capture_savemkv_display.py display and save to mkv files

    • test_display.py testing of opencv display framerate, no camera, just refresh rate.

    • test_savehd5.py testing of the disk throughput with hdf5, no camera

    • test_savetiff.py testing of the disk throughput with tiff, no camera

    • test_saveavi.py testing of the disk throughput with avi, no camera, only 3 color planes per image possible

    • test_savemkv.py testing of the disk throughput with mkv/mp4v, no camera, only 3 color planes per image possible

    • test_blackfly.py tests the blackfly capture module and reports framerate, no display

    • blackfly_savehdf5.py no display but incorporates saving to disk

    • blackfly_savetiff.py no display but incorporates saving to disk

    • blackfly_savehdf5_display.py display and incorporates saving to disk

    • blackfly_savetiff_display.py display and incorporates saving to disk

    Pip upload

    py -3 setup.py check
    py -3 setup.py sdist
    py -3 setup.py bdist_wheel
    pip3 install dist/thenewpackage.whl
    twine upload dist/*
    

    Capture modules

    blackflyCapture

    Simplifies the settings needed for the Blackfly camera.
    Supports trigger out during frame exposure and trigger in for frame start.
    Optimized settings to achieve full frame rate with the Blackfly S BFS-U3-04S2M.

    nanoCapture

    Uses gstreamer pipeline for Jetson Nano.
    Pipeline for NVIDIA conversion and nvarguscamera capture.
    Settings optimized for Sony IMX219 Raspi v2 Module.

    cv2Capture

    Uses the cv2 capture architecture.
    The video subsystem is chosen based on the operating system.

    rtspCapture

    gstreamer based rtsp network stream capture for all platforms.
    gstreamer is called through OpenCV.
    By default OpenCV supports ffmpeg and not gstreamer. The Jetson Nano does not support ffmpeg, but OpenCV comes prebuilt with gstreamer on that platform.
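
    As a hedged example of what such a capture can look like with a GStreamer-enabled OpenCV build, the pipeline below opens an H.264 RTSP stream; the URL is a placeholder and the exact elements depend on your camera's codec:

    # Illustrative only: requires OpenCV built with GStreamer support.
    import cv2

    rtsp_url = "rtsp://user:pass@192.168.1.10:554/stream"  # placeholder address
    pipeline = (
        f"rtspsrc location={rtsp_url} latency=10 ! "
        "rtph264depay ! h264parse ! avdec_h264 ! "
        "videoconvert ! appsink"
    )
    cap = cv2.VideoCapture(pipeline, cv2.CAP_GSTREAMER)
    ok, frame = cap.read()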

    piCapture

    Interface for the picamera module. Deprecated, since cv2Capture is more efficient on the Raspberry Pi.

    Changes

    2022 - February added libcamera capture for Raspbian Bullseye
    2022 - January added queue as initialization option, updated cv2Capture
    2021 - November moved queue into class
    2021 - November added rtp server and client
    2021 - November added mkvServer, wheel installation, cleanup
    2021 - October added aviServer and multicamera example, PySpin trigger fix
    2021 - September updated PySpin trigger out polarity setting  
    2020 - Release  
    Urs Utzinger
    

    References

    realpython:
    https://realpython.com/python-concurrency/
    https://realpython.com/python-sleep/
    https://realpython.com/async-io-python/

    python for the lab:
    https://www.pythonforthelab.com/blog/handling-and-sharing-data-between-threads/

    Example Camera Performance

    Sony IMX287 FLIR Blackfly S BFS-U3-04S2M

    • 720×540 524fps
    • auto_exposure off

    OV5647 OmniVision RasPi

    • auto_exposure 0: auto, 1:manual
    • exposure in microseconds
    • Max Resolution 2592×1944
    • YU12, (YUYV, RGB3, JPEG, H264, YVYU, VYUY, UYVY, NV12, BGR3, YV12, NV21, BGR4)
    • 320×240 90fps
    • 640×480 90fps
    • 1280×720 60fps
    • 1920×1080 6.4fps
    • 2592×1944 6.4fps

    IMX219 Sony RasPi

    • auto_exposure 0: auto, 1:manual
    • exposure in microseconds
    • Max Resolution 3280×2464
    • YU12, (YUYV, RGB3, JPEG, H264, YVYU, VYUY, UYVY, NV12, BGR4)
    • 320×240 90fps
    • 640×480 90fps
    • 1280×720 60fps
    • 1920×1080 4.4fps
    • 3280×2464 2.8fps

    ELP USB Camera RasPi

    • MJPG
    • 320×240 and 640×480, 120fps
    • auto_exposure: could not figure it out in MJPG mode
    • auto_exposure = 0 -> static exposure
    • exposure is about (exposure value / 10) in ms
    • WB_TEMP 6500

    Dell Internal USB

    • 320×240, 30fps
    • YUY2
    • autoexposure ? 0.25, 0.74 -1. 0
    • WB_TEMP 4600
    • 1280×720, 30fps
    • 640×480, 30fps
    • 960×540, 30fps

    Visit original content creator repository
    https://github.com/uutzinger/camera

  • language-comparison

    Logic Server: My own TodoMVC of server-side code

    Comparing languages by re-implementing some typical functions.

    • Text Service functions
    • GraphQL server

    Code Samples

    Here are samples of each language. Or browse the folders to also compare the tests and config files.

    Crystal

    # Plaintext and HTML manipulation.
    module TextService
      extend self
    
      #
      # Return a new string enhanced with typographic characters:
      #  Single quotes: ’
      #  Double quotes: “”
      #
      def add_typography(text : String) : String
        text.gsub(/"([^"]+)"/, "\\1”")
            .gsub('\'', '’')
      end
    
    
      #
      # Add nicer typography that HTML can provide:
      #  Fractions using superscript and subscript.
      #
      def add_html_typography(text : String) : String
        text.gsub(%r{\b(\d+)/(\d+)\b}, "<sup>\\1</sup>&frasl;<sub>\\2</sub>")
      end
    end

    Elixir

    import String
    
    
    defmodule TextService do
      @moduledoc """
      Plaintext and HTML manipulation.
      """
    
      @doc """
      Return a new string enhanced with typographic characters:
        Single quotes: ’
        Double quotes: “”
      """
      @spec add_typography(binary) :: binary
      def add_typography(text) do
        text
        |> replace(~r/\"([^\"]+)\"/, "“\\1”")
        |> replace(~r/'/, "’")
      end
    
    
      @doc """
      Add nicer typography that HTML can provide:
        Fractions using superscript and subscript.
      """
      @spec add_html_typography(binary) :: binary
      def add_html_typography(text) do
        text
        |> replace(~r/\b(\d+)\/(\d+)\b/, "<sup>\\1</sup>&frasl;<sub>\\2</sub>")
      end
    end

    Python

    """Plaintext and HTML manipulation."""
    
    import re
    
    DOUBLE_QUOTED_TEXT = re.compile(r'"([^"]+)"')       # "Hello"
    FRACTION           = re.compile(r'\b(\d+)/(\d+)\b') # 1/2
    
    
    def add_typography(text: str) -> str:
        """ Return a new string enhanced with typographic characters:
              Single quotes: ’
              Double quotes: “”
        """
        return DOUBLE_QUOTED_TEXT.sub(r'“\1”', text).replace("'", "’")
    
    
    def add_html_typography(text: str) -> str:
        """ Add nicer typography that HTML can provide:
              Fractions using superscript and subscript.
        """
        return FRACTION.sub(r'<sup>\1</sup>&frasl;<sub>\2</sub>', text)

    Rust

    /// Plaintext and HTML manipulation.
    
    use lazy_static::lazy_static;
    use regex::Regex;
    use std::borrow::Cow;
    
    lazy_static! {
        static ref DOUBLE_QUOTED_TEXT: Regex = Regex::new(r#""(?P<content>[^"]+)""#).unwrap();
        static ref FRACTION:           Regex = Regex::new(r"\b(\d+)/(\d+)\b").unwrap();
    }
    
    /// Return a new string enhanced with typographic characters:
    ///     Single quotes: ’
    ///     Double quotes: “”
    fn add_typography(text: &str) -> String {
        DOUBLE_QUOTED_TEXT
            .replace_all(text, "“$content”")
            .replace("'", "’")
    }
    
    /// Add nicer typography that HTML can provide:
    ///     Fractions using superscript and subscript.
    ///
    fn add_html_typography(text: &str) -> Cow<str> {
        FRACTION.replace_all(text, r"<sup>$1</sup>&frasl;<sub>$2</sub>")
    }

    Swift

    import Foundation
    
    extension String {
        /// Provide a higher-level API for regexes.
        func gsub(_ regex: NSRegularExpression, _ replacement: String) -> String {
            return regex.stringByReplacingMatches(
                in: self,
                range: NSRange(location: 0, length: self.utf16.count),
                withTemplate: replacement
            )
        }
    }
    
    
    let SINGLE_QUOTE =  try! NSRegularExpression(pattern: "'")
    let DOUBLE_QUOTES = try! NSRegularExpression(pattern: #""([^"]+)""#)
    let FRACTION =      try! NSRegularExpression(pattern: #"\b(\d+)/(\d+)\b"#)
    
    
    /// Return a new String enhanced with typographic characters:
    ///   Single quotes: ’
    ///   Double quotes: “ ”
    func addTypography(text: String) -> String {
        return text
            .gsub(SINGLE_QUOTE,  "’")
            .gsub(DOUBLE_QUOTES, "“$1”")
    }
    
    
    /// Add nicer typography that HTML can provide:
    ///   Fractions using superscript and subscript.
    func addHtmlTypography(text: String) -> String {
        return text.gsub(FRACTION, #"<sup>$1</sup>&frasl;<sub>$2</sub>"#)
    }

    Visit original content creator repository
    https://github.com/dogweather/language-comparison

  • datakit-miniprogram

    DataFlux RUM Data Collection SDK for WeChat Mini Programs

    By including the SDK file, you can monitor Mini Program performance metrics, error logs, and resource request data, and report them to Datakit on the DataFlux platform.

    Usage

    Include the SDK in the Mini Program's app.js file in one of the following ways.

    Import via npm (see WeChat's official guide on importing npm packages)

    const { datafluxRum } = require('@cloudcare/rum-miniapp')
    // Initialize RUM
    datafluxRum.init({
        datakitOrigin: 'https://datakit.xxx.com/', // Required. Datakit origin; the domain must be added to the request whitelist in the Mini Program admin console
        applicationId: 'appid_xxxxxxx', // Required. Application ID generated on the DataFlux platform
        env: 'testing', // Optional. Environment of the Mini Program
        version: '1.0.0' // Optional. Version of the Mini Program
    })

    Import a locally downloaded file from the CDN (download link)

    const { datafluxRum } = require('./lib/dataflux-rum-miniapp.js')
    // Initialize RUM
    datafluxRum.init({
        datakitOrigin: 'https://datakit.xxx.com/', // Required. Datakit origin; the domain must be added to the request whitelist in the Mini Program admin console
        applicationId: 'appid_xxxxxxx', // Required. Application ID generated on the DataFlux platform
        env: 'testing', // Optional. Environment of the Mini Program
        version: '1.0.0' // Optional. Version of the Mini Program
    })

    Configuration

    Initialization parameters

    • applicationId (String, required): Application ID created on the DataFlux platform.
    • datakitOrigin (String, required): Datakit origin for data reporting. Note: it must be added to the request whitelist in the Mini Program admin console.
    • env (String, optional): Current environment of the Mini Program, e.g. prod: production; gray: gray release; pre: pre-release; common: daily; local: local environment.
    • version (String, optional): Version number of the Mini Program.
    • sampleRate (Number, default 100): Percentage of metric data collected: 100 collects everything, 0 collects nothing.
    • traceType (new) (Enum, default ddtrace): Type of request header used to link with the APM collection tool. Currently compatible types: ddtrace, zipkin, skywalking_v3, jaeger, zipkin_single_header, w3c_traceparent. Note: OpenTelemetry supports the zipkin_single_header, w3c_traceparent, and zipkin types.
    • traceId128Bit (new) (Boolean, default false): Whether to generate the traceID as 128 bits; used together with traceType, currently supported for zipkin and jaeger.
    • allowedTracingOrigins (new) (Array, default []): List of all requests that should have the trace collector's headers injected. Each entry can be a request origin or a regular expression. An origin is the protocol (including ://), the host name (or IP address) [and port], for example: ["https://api.example.com", /https:\/\/.*\.my-api-domain\.com/].
    • trackInteractions (Boolean, default false): Whether to enable user action collection.

    Notes

    1. The Datakit domain used for datakitOrigin must be added to the request whitelist in the Mini Program admin console.
    2. The profile field returned by the WeChat Mini Program request APIs wx.request and wx.downloadFile is currently not supported on iOS, so timing-related data in the collected resource information will be incomplete. There is no workaround at the moment; see the API support status for request and downloadFile.
    3. When trackInteractions (user action collection) is enabled, WeChat Mini Program restrictions prevent the SDK from collecting the content and structure of controls. The SDK therefore takes a declarative approach: set a data-name attribute in the wxml file to give an interactive element a name, which makes it easier to identify the action in later statistics. For example:
     <button bindtap="bindSetData" data-name="setData">setData</button>

    Visit original content creator repository
    https://github.com/GuanceCloud/datakit-miniprogram

  • Pytorch-Implemented-Deep-SR-ITM

    Pytorch Implemented Deep-SR-ITM

    A Pytorch implemented Deep SR-ITM (ICCV2019 oral)

    Soo Ye Kim, Jihyong Oh, Munchurl Kim. Deep SR-ITM: Joint Learning of Super-Resolution and Inverse Tone-Mapping for 4K UHD HDR Applications. IEEE International Conference on Computer Vision, 2019.

    Note: The code is completely adapted from https://github.com/sooyekim/Deep-SR-ITM but rewritten in pytorch format. This repository is NOT aimed to improve the baseline, but to retain the original settings in a different implementation. If you have any questions for the details of the implementations, please refer to the original repo.

    Test Environment

    • Ubuntu 16.04 LTS
    • python 3.7.5
    • pytorch 1.3.1
    • torchvision 0.4.2
    • CUDA 10.1
    • opencv 3.4.2
    • numpy 1.17.4

    Data Preparation

    1. Download training and testing data from https://github.com/sooyekim/Deep-SR-ITM
    2. Use MATLAB to convert the .mat files into .png images (for both SDR and HDR images)
    3. Arrange the data as follows…

    ${DATA_ROOT}
    ├── trainset_SDR
    │   ├── 000001.png
    │   ├── 000002.png
    │   ├── ...
    │   └── 039840.png
    ├── trainset_HDR
    │   ├── 000001.png
    │   ├── 000002.png
    │   ├── ...
    │   └── 039840.png
    ├── testset_SDR
    │   ├── 000001.png
    │   ├── 000002.png
    │   ├── ...
    │   └── 000028.png
    └── testset_HDR
        ├── 000001.png
        ├── 000002.png
        ├── ...
        └── 000028.png
    

    Prepare Environment

    # Prepare CUDA Installation
    ...
    
    # git clone repository
    git clone https://github.com/greatwallet/Pytorch-Implemented-Deep-SR-ITM.git
    cd Pytorch-Implemented-Deep-SR-ITM
    
    # create conda environment
    conda create -n env-sr-itm python=3.7 -y
    conda activate env-sr-itm
    conda install -c pytorch pytorch -y
    conda install -c pytorch torchvision -y
    conda install -c conda-forge opencv -y
    conda install numpy -y
    
    # set soft link to data path
    ln -s ${DATA_ROOT} ./
    

    Usage

    The default parameters in the scripts are set strictly according to the original repo, but feel free to modify them as needed.

    Train

    python train_base_net.py
    python train_full_net.py
    

    Test

    Download pretrained checkpoints from here.

    Please specify the path of the testset and other settings in the test.py.

    python test.py
    

    Note: The difference between the val and test phases in YouTubeDataset.__init__ is:

    • val: the SDR and HDR images must both be provided, and the SDR images must be the SAME size as the HDR images; YouTubeDataset will resize the SDR images for the network later on.
    • test: the HDR images may or may not be provided to the dataset. If provided, the SDR images should be k times smaller than the HDR images (where k is the scale parameter of the networks).

    Acknowledgement

    SSIM and MS-SSIM functions are borrowed from https://github.com/VainF/pytorch-msssim

    Contact

    Please contact me via email (cxt_tsinghua@126.com) for any problems regarding the released code.

    Visit original content creator repository
    https://github.com/greatwallet/Pytorch-Implemented-Deep-SR-ITM

  • cfor

    cfor

    cfor (command for) is an AI-powered terminal assistant that helps you find and execute commands without digging through man pages. Simply ask what you want to do in natural language, and cfor will suggest relevant commands with brief explanations.

    The name reflects its usage pattern: cfor [what you want to do] is like asking “what’s the command for [task]?” – making it intuitive to use for finding the right commands for your tasks.

    Features

    • Natural Language Queries: Ask for commands in plain English
    • Smart Command Suggestions: Get multiple command variations with inline comments
    • Interactive Selection: Choose the right command from a list of suggestions
    • Terminal Integration: Selected commands are automatically inserted into your terminal prompt
    • OpenAI Integration: Powered by OpenAI’s language models (supports multiple models)

    Installation

    Using Homebrew (macOS and Linux)

    brew install cowboy-bebug/tap/cfor

    From Source

    Requirements:

    • Go 1.24 or later
    git clone https://github.com/cowboy-bebug/cfor.git && cd cfor
    make install

    Usage

    cfor [question]

    Examples

    cfor "listing directories with timestamps"
    cfor "installing a new package for a pnpm workspace"
    cfor "applying terraform changes to a specific resource"
    cfor "running tests in a go project"

    Configuration

    cfor requires an OpenAI API key to function. You can set it up in one of two ways:

    # Use a general OpenAI API key
    export OPENAI_API_KEY="sk-..."
    
    # Or use a dedicated key for cfor (takes precedence)
    export CFOR_OPENAI_API_KEY="sk-..."

    Model Selection

    By default, cfor uses gpt-4o. You can switch to other supported models:

    export CFOR_OPENAI_MODEL="gpt-4o"

    Building from Source

    make build    # Build the binary
    make install  # Install to your GOPATH
    make clean    # Clean build artifacts

    Supported Platforms

    • Linux (amd64, arm64)
    • macOS (amd64, arm64)

    Contributing

    Contributions are welcome! Feel free to open issues or submit pull requests.

    License

    MIT

    Visit original content creator repository https://github.com/cowboy-bebug/cfor