## Designing L and T antenna tuners for HF on the Smith chart

The Smith chart is a tool widely used by professional RF engineers for solving transmission-line stub-matching problems and for all sorts of quick calculations.

The Smith chart can also be used for quick back-of-the-envelope engineering calculations on L and T antenna tuners.

In the picture above I have plotted a T-configuration antenna tuner with the first capacitor set to such a large value that it looks like a short to the RF voltage (large C, low |Z|). In practice the configuration then becomes an L tuner: a shunt L followed by a series C, seen from the load towards the generator.

I measured the impedance at the shack end of the ladder line feeding my doublet antenna with a vector network analyzer: Z = (24.1 − j35) Ω at 14.200 MHz. That can be plotted as a point in the lower half of the Smith chart (capacitive Z).

(1) Seen from the load towards the generator, the first element is now a shunt inductor to ground. We can use this inductance to move along a constant-conductance circle in the Y plane (upwards in the Z plane): the conductance stays constant while the susceptance varies. (We remember from the RF engineering classes at engineering school that Y = 1/Z, of course.)

(2) Then we use a series capacitor to move down inside the 1.25:1 SWR circle. We don't have to hit the center, because anything inside the inner 1.25:1 circle is good enough. (Here we move along a constant-resistance circle in the Z plane: the R part of R + jX stays constant while X becomes more negative.)

Determination of component values can be done easily by hand in a tool like this while still retaining an intuitive understanding of what is going on.
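The same two moves can also be checked numerically. Below is a minimal sketch in plain Python (assuming a 50 Ω system and the measured load above; the variable names are my own) that solves step (1) for the shunt inductor and step (2) for the series capacitor. For this load it lands at roughly 390 nH and 320 pF:

```python
import cmath
import math

F = 14.2e6             # operating frequency (Hz)
W = 2 * math.pi * F    # angular frequency (rad/s)
Z0 = 50.0              # system impedance (ohms)
ZL = 24.1 - 35j        # measured load impedance (ohms)

# Step 1: the shunt inductor moves us along the constant-conductance circle.
# Choose the total susceptance B so that Re(1/(G + jB)) = Z0, picking B < 0
# so the remaining series reactance is positive (cancellable by a series C).
Y_load = 1 / ZL
G = Y_load.real
B_total = -math.sqrt(G / Z0 - G**2)    # total susceptance after the inductor
B_shunt = B_total - Y_load.imag        # susceptance the inductor must add
L_shunt = -1 / (W * B_shunt)           # B = -1/(wL) for an inductor

# Step 2: the series capacitor cancels the remaining +jX on the R = Z0 circle.
Z_mid = 1 / complex(G, B_total)        # impedance after the shunt inductor
C_series = 1 / (W * Z_mid.imag)        # X = -1/(wC) for a capacitor

Z_in = Z_mid - 1j / (W * C_series)
print(f"L ≈ {L_shunt*1e9:.0f} nH, C ≈ {C_series*1e12:.0f} pF, Zin = {Z_in:.1f}")
```

This is of course the same answer the chart gives graphically; the point of the chart is that you see the two circles you are moving along while you do it.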

Black magic! Especially with a digital Smith chart tool.

## Why you should choose a lowpass configuration on your L antenna tuner

K6JCA has analyzed the component values needed to match loads lying on a constant-reflection-coefficient circle. The plot below shows that if you select the high-pass configuration for your tuner, certain reflection-coefficient angles will give you skyrocketing component values.

Component values for the highpass and lowpass configurations

Above you can see that the LsCp and CpLs configurations keep the maximum component values quite flat. LsCp and CpLs are therefore the best engineering choices based on cost and realistic component values.
http://k6jca.blogspot.no/2015/04/notes-on-antenna-tuners-t-network-part-1.html
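The flavor of that analysis can be reproduced numerically. The sketch below (plain Python; the function name and the choice of an SWR 3:1 circle are my own, and it only handles the lowpass series-L/shunt-C orientations, LsCp and CpLs) sweeps loads around a constant-|Γ| circle and reports the largest component values where a lowpass network is realizable:

```python
import cmath
import math

Z0, F = 50.0, 14.2e6
W = 2 * math.pi * F

def lowpass_l_match(ZL):
    """Lowpass L-network (series L + shunt C) for load ZL, or None if the
    lowpass orientation is not realizable for this load."""
    R, X = ZL.real, ZL.imag
    if R < Z0:
        # LsCp: series element next to the load, shunt C on the generator side
        Q = math.sqrt(Z0 / R - 1)
        Xs = Q * R - X          # required series reactance
        Bp = Q / Z0             # required shunt susceptance (capacitive)
    else:
        # CpLs: shunt C next to the load, series L on the generator side
        Y = 1 / ZL
        Q = math.sqrt(1 / (Y.real * Z0) - 1)
        Bp = Q * Y.real - Y.imag
        Xs = Q * Z0
    if Xs <= 0 or Bp <= 0:
        return None             # this load would need a highpass network
    return Xs / W, Bp / W       # (L in henries, C in farads)

# Sweep a constant-|reflection| circle: SWR 3:1 means |gamma| = 0.5.
gamma_mag = 0.5
matches = []
for deg in range(0, 360, 5):
    g = gamma_mag * cmath.exp(1j * math.radians(deg))
    ZL = Z0 * (1 + g) / (1 - g)
    m = lowpass_l_match(ZL)
    if m:
        matches.append(m)

Lmax = max(L for L, C in matches)
Cmax = max(C for L, C in matches)
print(f"lowpass realizable at {len(matches)} of 72 angles, "
      f"Lmax ≈ {Lmax*1e9:.0f} nH, Cmax ≈ {Cmax*1e12:.0f} pF")
```

Angles where the function returns None are the regions where the other orientation (or a highpass network) is needed, which fits K6JCA's observation that the LsCp/CpLs pair together covers the chart with flat, realistic component values.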

## Repeaterlist for Oslo area, Norway (repeaterliste Oslo området)

I have made a repeater list for the Oslo area in Norway. The list can be saved as CSV from Excel, and after saving as CSV it is compatible with Chirp.

Make sure the CSV file uses commas (,) as separators and NOT semicolons (;). This can be set in the menu you see below: go to the Start menu, type "locale", and click "Change date, time, or number formats".

Thanks to Alf LA2NTA for the data I used in this list. See another posting for the direct link to Alf's list in the cloud. My list here will not be frequently updated, but as of April 2017 it should give a good starting point.

repeaterliste_eksport_XLS_XLS_sortert

## How to export and import Excel files in CSV format with Chirp

• Connect Chirp to the radio
• Export from Chirp to a CSV file
• Study this CSV file and use it as a template
• Copy-paste frequency, CTCSS, offset info etc. from ANOTHER XLS file into the CSV file you made above
• Save as CSV from Excel (this is now your frequency list to import to the radio)
• Check with Notepad++ that COMMA (not semicolon) separation is used when you save as CSV from Excel. If this is not the case, go to regional settings / additional settings and set the list separator to , (as opposed to ;). Restart Excel, and a reboot won't hurt to make the settings take effect
• Now import this CSV list into Chirp
• If you get an error message that Chirp can't convert floating point in the tone settings, make sure to format EVERYTHING in your Excel sheet as GENERAL (not text, not number)

## MD-390 DMR UHF radio. Modified firmware.

I updated my MD-390 with the modified firmware. Among other things, the firmware adds these new functions:

• Promiscuous mode: listen to all voice groups on one timeslot, even if you haven't programmed the VG into a channel
• See last heard on the display for all VGs on the timeslot assigned to the channel, regardless of whether you have enabled that VG
• VU meter for TX audio
• Updated contact list database
• Display dimming timer and intensity adjustable
• Lower beep volume for "transmit not allowed" etc.
• List of last heard
• Channel info on display: tx freq, rx freq, timeslot, TG, sender ID, repeater ID etc

## Norwegian repeaterlist / Norsk repeaterliste (unofficial / uoffisiell)

Alf LA2NTA has made an excellent repeater list covering Norwegian repeaters; it is updated quite frequently.

## Tytera TYT-390 GPS setup

To set up GPS on the TYT-390, create a dedicated contact named GPS with call ID 5057.

Then set Destination ID: GPS under the GPS settings. Set the interval to 60 s or more (not too often, as GPS packets take repeater capacity).

Now select “GPS system 1” under the channels you want GPS enabled on. I have made a set of channels with GPS on and a set of channels with GPS off.

In the Brandmeister dashboard, go to Services / Self Care and select Chinese radio. Check that your callsign and your name look OK (you need an account at Brandmeister).

Leave your radio outside for several minutes to achieve GPS lock (it can take quite some time).

You should see a globe symbol appear without the red ring (a red ring means no GPS lock).

Then you can check aprs.fi for your callsign.

## Deep learning and artificial intelligence: papers, courses and contact persons

A list of resources related to deep learning and artificial intelligence:

### Free Online deep learning Books

1. Deep Learning by Yoshua Bengio, Ian Goodfellow and Aaron Courville (05/07/2015)
2. Neural Networks and Deep Learning by Michael Nielsen (Dec 2014)
3. Deep Learning by Microsoft Research (2013)
4. Deep Learning Tutorial by LISA lab, University of Montreal (Jan 6 2015)
5. neuraltalk by Andrej Karpathy : numpy-based RNN/LSTM implementation
6. An introduction to genetic algorithms
7. Artificial Intelligence: A Modern Approach
8. Deep Learning in Neural Networks: An Overview

### Courses in machine learning

1. Machine Learning – Stanford by Andrew Ng in Coursera (2010-2014)
2. Machine Learning – Caltech by Yaser Abu-Mostafa (2012-2014)
3. Machine Learning – Carnegie Mellon by Tom Mitchell (Spring 2011)
4. Neural Networks for Machine Learning by Geoffrey Hinton in Coursera (2012)
5. Neural networks class by Hugo Larochelle from Université de Sherbrooke (2013)
6. Deep Learning Course by CILVR lab @ NYU (2014)
7. A.I – Berkeley by Dan Klein and Pieter Abbeel (2013)
8. A.I – MIT by Patrick Henry Winston (2010)
9. Vision and learning – computers and brains by Shimon Ullman, Tomaso Poggio, Ethan Meyers @ MIT (2013)
10. Convolutional Neural Networks for Visual Recognition – Stanford by Fei-Fei Li, Andrej Karpathy (2015)
11. Convolutional Neural Networks for Visual Recognition – Stanford by Fei-Fei Li, Andrej Karpathy (2016)
12. Deep Learning for Natural Language Processing – Stanford
13. Neural Networks – usherbrooke
14. Machine Learning – Oxford (2014-2015)
15. Deep Learning – Nvidia (2015)
16. Graduate Summer School: Deep Learning, Feature Learning by Geoffrey Hinton, Yoshua Bengio, Yann LeCun, Andrew Ng, Nando de Freitas and several others @ IPAM, UCLA (2012)
17. Deep Learning – Udacity/Google by Vincent Vanhoucke and Arpan Chakraborty (2016)
18. Deep Learning – UWaterloo by Prof. Ali Ghodsi at University of Waterloo (2015)
19. Statistical Machine Learning – CMU by Prof. Larry Wasserman
20. Deep Learning Course by Yann LeCun (2016)
21. Bay Area DL school by Andrew Ng, Yoshua Bengio, Samy Bengio, Andrej Karpathy, Richard Socher, Hugo Larochelle and many others @ Stanford, CA (2016)
22. Designing, Visualizing and Understanding Deep Neural Networks – UC Berkeley
23. UVA Deep Learning Course, MSc in Artificial Intelligence, University of Amsterdam

### Videos and Lectures in machine learning

1. How To Create A Mind By Ray Kurzweil
2. Deep Learning, Self-Taught Learning and Unsupervised Feature Learning By Andrew Ng
3. Recent Developments in Deep Learning By Geoff Hinton
4. The Unreasonable Effectiveness of Deep Learning by Yann LeCun
5. Deep Learning of Representations by Yoshua Bengio
6. Principles of Hierarchical Temporal Memory by Jeff Hawkins
7. Machine Learning Discussion Group – Deep Learning w/ Stanford AI Lab by Adam Coates
8. Making Sense of the World with Deep Learning By Adam Coates
9. Demystifying Unsupervised Feature Learning by Adam Coates
10. Visual Perception with Deep Learning By Yann LeCun
11. The Next Generation of Neural Networks By Geoffrey Hinton at GoogleTechTalks
12. The wonderful and terrifying implications of computers that can learn By Jeremy Howard at TEDxBrussels
13. Unsupervised Deep Learning – Stanford by Andrew Ng in Stanford (2011)
14. Natural Language Processing By Chris Manning in Stanford
15. A Beginner's Guide to Deep Neural Networks by Natalie Hammel and Lorraine Yurshansky
16. Deep Learning: Intelligence from Big Data by Steve Jurvetson (and panel) at VLAB in Stanford.
17. Introduction to Artificial Neural Networks and Deep Learning by Leo Isikdogan at Motorola Mobility HQ

## Researchers

### Datasets

1. MNIST Handwritten digits
2. Google House Numbers from street view
3. CIFAR-10 and CIFAR-100
4. IMAGENET
5. Tiny Images – 80 million tiny images
6. Flickr Data 100 Million Yahoo dataset
7. Berkeley Segmentation Dataset 500
8. UC Irvine Machine Learning Repository
9. Flickr 8k
10. Flickr 30k
11. Microsoft COCO
12. VQA
13. Image QA
14. AT&T Laboratories Cambridge face database
15. AVHRR Pathfinder
16. Air Freight – The Air Freight data set is a ray-traced image sequence along with ground truth segmentation based on textural characteristics. (455 images + GT, each 160×120 pixels). (Formats: PNG)
17. Amsterdam Library of Object Images – ALOI is a color image collection of one-thousand small objects, recorded for scientific purposes. In order to capture the sensory variation in object recordings, we systematically varied viewing angle, illumination angle, and illumination color for each object, and additionally captured wide-baseline stereo images. We recorded over a hundred images of each object, yielding a total of 110,250 images for the collection. (Formats: png)
18. Annotated face, hand, cardiac & meat images – Most images & annotations are supplemented by various ASM/AAM analyses using the AAM-API. (Formats: bmp,asf)
19. Image Analysis and Computer Graphics
20. Brown University Stimuli – A variety of datasets including geons, objects, and “greebles”. Good for testing recognition algorithms. (Formats: pict)
21. CAVIAR video sequences of mall and public space behavior – 90K video frames in 90 sequences of various human activities, with XML ground truth of detection and behavior classification (Formats: MPEG2 & JPEG)
22. Machine Vision Unit
23. CCITT Fax standard images – 8 images (Formats: gif)
24. CMU CIL’s Stereo Data with Ground Truth – 3 sets of 11 images, including color tiff images with spectroradiometry (Formats: gif, tiff)
25. CMU PIE Database – A database of 41,368 face images of 68 people captured under 13 poses, 43 illuminations conditions, and with 4 different expressions.
26. CMU VASC Image Database – Images, sequences, stereo pairs (thousands of images) (Formats: Sun Rasterimage)
27. Caltech Image Database – about 20 images – mostly top-down views of small objects and toys. (Formats: GIF)
28. Columbia-Utrecht Reflectance and Texture Database – Texture and reflectance measurements for over 60 samples of 3D texture, observed with over 200 different combinations of viewing and illumination directions. (Formats: bmp)
29. Computational Colour Constancy Data – A dataset oriented towards computational color constancy, but useful for computer vision in general. It includes synthetic data, camera sensor data, and over 700 images. (Formats: tiff)
30. Computational Vision Lab
31. Content-based image retrieval database – 11 sets of color images for testing algorithms for content-based retrieval. Most sets have a description file with names of objects in each image. (Formats: jpg)
32. Efficient Content-based Retrieval Group
33. Densely Sampled View Spheres – Densely sampled view spheres – upper half of the view sphere of two toy objects with 2500 images each. (Formats: tiff)
34. Computer Science VII (Graphical Systems)
35. Digital Embryos – Digital embryos are novel objects which may be used to develop and test object recognition systems. They have an organic appearance. (Formats: various formats are available on request)
36. University of Minnesota Vision Lab
37. El Salvador Atlas of Gastrointestinal VideoEndoscopy – High-resolution images and videos from gastrointestinal video endoscopy studies. (Formats: jpg, mpg, gif)
38. FG-NET Facial Aging Database – Database contains 1002 face images showing subjects at different ages. (Formats: jpg)
39. FVC2000 Fingerprint Databases – FVC2000 is the First International Competition for Fingerprint Verification Algorithms. Four fingerprint databases constitute the FVC2000 benchmark (3520 fingerprints in all).
40. Biometric Systems Lab – University of Bologna
41. Face and Gesture images and image sequences – Several image datasets of faces and gestures that are ground truth annotated for benchmarking
42. German Fingerspelling Database – The database contains 35 gestures and consists of 1400 image sequences that contain gestures of 20 different persons recorded under non-uniform daylight lighting conditions. (Formats: mpg,jpg)
43. Language Processing and Pattern Recognition
44. Groningen Natural Image Database – 4000+ 1536×1024 (16 bit) calibrated outdoor images (Formats: homebrew)
45. ICG Testhouse sequence – 2 turntable sequences from different viewing heights, 36 images each, resolution 1000×750, color (Formats: PPM)
46. Institute of Computer Graphics and Vision
47. IEN Image Library – 1000+ images, mostly outdoor sequences (Formats: raw, ppm)
48. INRIA’s Syntim images database – 15 color images of simple objects (Formats: gif)
49. INRIA
50. INRIA’s Syntim stereo databases – 34 calibrated color stereo pairs (Formats: gif)
51. Image Analysis Laboratory – Images obtained from a variety of imaging modalities — raw CFA images, range images and a host of “medical images”. (Formats: homebrew)
52. Image Analysis Laboratory
53. Image Database – An image database including some textures
54. JAFFE Facial Expression Image Database – The JAFFE database consists of 213 images of Japanese female subjects posing 6 basic facial expressions as well as a neutral pose. Ratings on emotion adjectives are also available, free of charge, for research purposes. (Formats: TIFF Grayscale images.)
55. ATR Research, Kyoto, Japan
56. JISCT Stereo Evaluation – 44 image pairs. These data have been used in an evaluation of stereo analysis, as described in the April 1993 ARPA Image Understanding Workshop paper “The JISCT Stereo Evaluation” by R.C.Bolles, H.H.Baker, and M.J.Hannah, 263–274 (Formats: SSI)
57. MIT Vision Texture – Image archive (100+ images) (Formats: ppm)
58. MIT face images and more – hundreds of images (Formats: homebrew)
59. Machine Vision – Images from the textbook by Jain, Kasturi, Schunck (20+ images) (Formats: GIF TIFF)
60. Mammography Image Databases – 100 or more images of mammograms with ground truth. Additional images available by request, and links to several other mammography databases are provided. (Formats: homebrew)
61. ftp://ftp.cps.msu.edu/pub/prip – many images (Formats: unknown)
62. Middlebury Stereo Data Sets with Ground Truth – Six multi-frame stereo data sets of scenes containing planar regions. Each data set contains 9 color images and subpixel-accuracy ground-truth data. (Formats: ppm)
63. Middlebury Stereo Vision Research Page – Middlebury College
64. Modis Airborne simulator, Gallery and data set – High Altitude Imagery from around the world for environmental modeling in support of NASA EOS program (Formats: JPG and HDF)
65. NIST Fingerprint and handwriting – datasets – thousands of images (Formats: unknown)
66. NIST Fingerprint data – compressed multipart uuencoded tar file
67. NLM HyperDoc Visible Human Project – Color, CAT and MRI image samples – over 30 images (Formats: jpeg)
68. National Design Repository – Over 55,000 3D CAD and solid models of (mostly) mechanical/machined engineering designs. (Formats: gif, vrml, wrl, stp, sat)
69. Geometric & Intelligent Computing Laboratory
70. OSU (MSU) 3D Object Model Database – several sets of 3D object models collected over several years to use in object recognition research (Formats: homebrew, vrml)
71. OSU (MSU/WSU) Range Image Database – Hundreds of real and synthetic images (Formats: gif, homebrew)
72. OSU/SAMPL Database: Range Images, 3D Models, Stills, Motion Sequences – Over 1000 range images, 3D object models, still images and motion sequences (Formats: gif, ppm, vrml, homebrew)
73. Signal Analysis and Machine Perception Laboratory
74. Otago Optical Flow Evaluation Sequences – Synthetic and real sequences with machine-readable ground truth optical flow fields, plus tools to generate ground truth for new sequences. (Formats: ppm,tif,homebrew)
75. Vision Research Group
76. ftp://ftp.limsi.fr/pub/quenot/opflow/testdata/piv/ – Real and synthetic image sequences used for testing a Particle Image Velocimetry application. These images may be used for the test of optical flow and image matching algorithms. (Formats: pgm (raw))
77. LIMSI-CNRS/CHM/IMM/vision
78. LIMSI-CNRS
79. Photometric 3D Surface Texture Database – This is the first 3D texture database which provides both full real surface rotations and registered photometric stereo data (30 textures, 1680 images). (Formats: TIFF)
80. SEQUENCES FOR OPTICAL FLOW ANALYSIS (SOFA) – 9 synthetic sequences designed for testing motion analysis applications, including full ground truth of motion and camera parameters. (Formats: gif)
81. Computer Vision Group
82. Sequences for Flow Based Reconstruction – synthetic sequence for testing structure from motion algorithms (Formats: pgm)
83. Stereo Images with Ground Truth Disparity and Occlusion – a small set of synthetic images of a hallway with varying amounts of noise added. Use these images to benchmark your stereo algorithm. (Formats: raw, viff (khoros), or tiff)
84. Stuttgart Range Image Database – A collection of synthetic range images taken from high-resolution polygonal models available on the web (Formats: homebrew)
85. Department Image Understanding
86. The AR Face Database – Contains over 4,000 color images corresponding to 126 people’s faces (70 men and 56 women). Frontal views with variations in facial expressions, illumination, and occlusions. (Formats: RAW (RGB 24-bit))
87. Purdue Robot Vision Lab
88. The MIT-CSAIL Database of Objects and Scenes – Database for testing multiclass object detection and scene recognition algorithms. Over 72,000 images with 2873 annotated frames. More than 50 annotated object classes. (Formats: jpg)
89. The RVL SPEC-DB (SPECularity DataBase) – A collection of over 300 real images of 100 objects taken under three different illumination conditions (Diffuse/Ambient/Directed). Use these images to test algorithms for detecting and compensating for specular highlights in color images. (Formats: TIFF)
90. Robot Vision Laboratory
91. The Xm2vts database – The XM2VTSDB contains four digital recordings of 295 people taken over a period of four months. This database contains both image and video data of faces.
92. Centre for Vision, Speech and Signal Processing
93. Traffic Image Sequences and ‘Marbled Block’ Sequence – thousands of frames of digitized traffic image sequences as well as the ‘Marbled Block’ sequence (grayscale images) (Formats: GIF)
94. IAKS/KOGS
95. U Bern Face images – hundreds of images (Formats: Sun rasterfile)
96. U Michigan textures (Formats: compressed raw)
97. U Oulu wood and knots database – Includes classifications – 1000+ color images (Formats: ppm)
98. UCID – an Uncompressed Colour Image Database – a benchmark database for image retrieval with predefined ground truth. (Formats: tiff)
99. UMass Vision Image Archive – Large image database with aerial, space, stereo, medical images and more. (Formats: homebrew)
100. UNC’s 3D image database – many images (Formats: GIF)
101. USF Range Image Data with Segmentation Ground Truth – 80 image sets (Formats: Sun rasterimage)
102. University of Oulu Physics-based Face Database – contains color images of faces under different illuminants and camera calibration conditions as well as skin spectral reflectance measurements of each person.
103. Machine Vision and Media Processing Unit
104. University of Oulu Texture Database – Database of 320 surface textures, each captured under three illuminants, six spatial resolutions and nine rotation angles. A set of test suites is also provided so that texture segmentation, classification, and retrieval algorithms can be tested in a standard manner. (Formats: bmp, ras, xv)
105. Machine Vision Group
106. Usenix face database – Thousands of face images from many different sites (circa 1994)
107. View Sphere Database – Images of 8 objects seen from many different view points. The view sphere is sampled using a geodesic with 172 images/sphere. Two sets for training and testing are available. (Formats: ppm)
108. PRIMA, GRAVIR
109. Vision-list Imagery Archive – Many images, many formats
110. Wiry Object Recognition Database – Thousands of images of a cart, ladder, stool, bicycle, chairs, and cluttered scenes with ground truth labelings of edges and regions. (Formats: jpg)
111. 3D Vision Group
112. Yale Face Database – 165 images (15 individuals) with different lighting, expression, and occlusion configurations.
113. Yale Face Database B – 5760 single light source images of 10 subjects each seen under 576 viewing conditions (9 poses x 64 illumination conditions). (Formats: PGM)
114. Center for Computational Vision and Control
115. DeepMind QA Corpus – Textual QA corpus from CNN and DailyMail. More than 300K documents in total. Paper for reference.
116. YouTube-8M Dataset – YouTube-8M is a large-scale labeled video dataset that consists of 8 million YouTube video IDs and associated labels from a diverse vocabulary of 4800 visual entities.
117. Open Images dataset – Open Images is a dataset of ~9 million URLs to images that have been annotated with labels spanning over 6000 categories.

## CAT / RS232 / CW interface (FTDI, MAX232, 5V controllable FETs)

I made a CAT and CW interface from readily available components. It is of course possible to buy a microHAM interface or similar, but I wanted no USB soundcard functionality, and XP, Win 10 and Linux compatibility without having to install drivers. A native FTDI chip from SF does the job nicely. The FTDI chip has 5 V TTL outputs, while true RS232 uses ±12 V signal levels. I therefore pressed into service a MAX232 board that I designed approximately 10 years ago and that was lying in the junk box: 5 V TTL in and out of the MAX232 chip, connected to the FTDI chip.

I used some "5 V TTL FETs" for switching the CW signal. The RTS/CTS signals from the FTDI interface can drive these FETs directly. I needed to invert the signal so that the CW keying is off by default, and since I had two of the 5 V TTL FETs lying in the junk box, I used one to invert and one to drive the CW key output. The FETs can be seen in the picture to the left.

Here the quick and dirty prototype can be seen: the old RS232 converter board to the left, the switching FETs to the right and the FTDI board below.
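On the software side, keying through this interface only requires toggling RTS with the right Morse timing. The timing is easy to generate; here is a sketch in plain Python (the function name is my own) that turns text into key-down/key-up intervals at a given speed, which a serial library could then play out on the RTS line:

```python
MORSE = {
    "A": ".-",   "B": "-...", "C": "-.-.", "D": "-..",  "E": ".",
    "F": "..-.", "G": "--.",  "H": "....", "I": "..",   "J": ".---",
    "K": "-.-",  "L": ".-..", "M": "--",   "N": "-.",   "O": "---",
    "P": ".--.", "Q": "--.-", "R": ".-.",  "S": "...",  "T": "-",
    "U": "..-",  "V": "...-", "W": ".--",  "X": "-..-", "Y": "-.--",
    "Z": "--..",
}

def keying_schedule(text, wpm=20):
    """Return (key_down, seconds) pairs for `text` using standard PARIS
    timing: dit = 1.2 / wpm seconds, dah = 3 dits, gaps of 1/3/7 dits."""
    dit = 1.2 / wpm
    events = []
    for word in text.upper().split():
        for ch in word:
            for sym in MORSE[ch]:
                events.append((True, dit if sym == "." else 3 * dit))
                events.append((False, dit))   # gap between elements
            events[-1] = (False, 3 * dit)     # widen to gap between letters
        events[-1] = (False, 7 * dit)         # widen to gap between words
    return events

# A serial library could then play this out on the RTS pin, roughly
# (pyserial-style attribute access, shown only as a sketch):
#   for key_down, seconds in keying_schedule("CQ CQ TEST"):
#       port.rts = key_down
#       time.sleep(seconds)
```

Note that `time.sleep` timing jitter on a desktop OS is audible at higher speeds; for serious keying the same schedule could be handed to a small microcontroller instead.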