#!/bin/bash
echo hello $1 $2
On the command line: ./startup.sh dinesh dontha
Output: hello dinesh dontha
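For reference, here is a slightly fuller sketch (hypothetical script name args.sh) showing the other positional-parameter variables bash provides:
#!/bin/bash
# args.sh - print the script name, the argument count, and all arguments
echo "script name: $0"
echo "arg count  : $#"
echo "all args   : $@"
echo "first two  : $1 $2"
Running ./args.sh dinesh dontha prints the script name, the count 2, and both arguments.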
tail -100f catalina.out
Let's say we want to follow the log that is currently being written (here, Tomcat's catalina.out) and start from its last 100 lines; then you should use the above command.
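The same thing can be written in a slightly more explicit form, and the line count is easy to adjust:
# follow the file, starting from its last 100 lines
tail -n 100 -f catalina.out
# follow from the last 500 lines instead
tail -n 500 -f catalina.out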
cd <specific folder>
Changes to the specified directory.
cd ..
Changes back to the parent directory, i.e., one level up.
mkdir <dir_name>
Creates a new directory with the given name <dir_name>.
mkdir -p <dir1/dir2/dir3>
Creates the whole directory structure, including the intermediate directories, which fails without the -p option.
rm -r <dir1/dir2/dir3>
Removes the complete directory structure, regardless of the number of nested directories and files; without the -r option a non-empty directory cannot be removed. A short sketch of both commands follows below.
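A minimal sketch tying mkdir -p and rm -r together (the directory names are just placeholders):
# create a three-level structure in one go; plain mkdir would fail here
mkdir -p projects/app/logs
# remove the whole tree again; plain rm/rmdir would refuse a non-empty directory
rm -r projects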
On macOS I would recommend a different installation process rather than installing Node.js with the brew command line: download an official build directly from the Node.js dist listing, an excerpt of which is shown below.
Node.js v6.9.4 dist listing (trimmed here to the entries relevant for macOS; the full listing also carries Linux, Windows, SunOS and AIX builds plus checksum files):
node-v6.9.4-darwin-x64.tar.gz    05-Jan-2017 21:29    12041778
node-v6.9.4-darwin-x64.tar.xz    05-Jan-2017 21:30    8316788
node-v6.9.4.pkg                  05-Jan-2017 21:24    15526075
SHASUMS256.txt                   05-Jan-2017 23:33    3865
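A minimal sketch of that manual install, assuming the darwin-x64 tarball above, the standard https://nodejs.org/dist/v6.9.4/ download location, and /usr/local as the install prefix (adjust the version and paths as needed):
# download the macOS 64-bit build and the checksum file
curl -O https://nodejs.org/dist/v6.9.4/node-v6.9.4-darwin-x64.tar.gz
curl -O https://nodejs.org/dist/v6.9.4/SHASUMS256.txt
# verify the download against the published checksum
grep node-v6.9.4-darwin-x64.tar.gz SHASUMS256.txt | shasum -a 256 -c -
# unpack and put the node and npm binaries on the PATH
tar -xzf node-v6.9.4-darwin-x64.tar.gz
sudo mv node-v6.9.4-darwin-x64 /usr/local/node-v6.9.4
export PATH=/usr/local/node-v6.9.4/bin:$PATH   # add this line to ~/.bash_profile to make it permanent
node -v   # should print v6.9.4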
Just a quick introduction: we can create a Kafka topic using a command-line utility and also using the programming APIs.
This post shows the command-line utility that helps us create a Kafka topic.
For your information, under the bin/ folder of the Kafka installation there are many utilities available for various uses; one of them, kafka-topics.sh, helps to create topics, describe topics, and perform other topic-related operations.
Before executing the topic command below, ensure your ZooKeeper server and Kafka cluster have already been started, in that order: start ZooKeeper first, then the Kafka service (a startup sketch follows below).
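If you are using the scripts bundled with the Kafka distribution, the startup looks roughly like this (the config file paths assume the default distribution layout):
# start ZooKeeper first
bin/zookeeper-server-start.sh config/zookeeper.properties
# then start the Kafka broker
bin/kafka-server-start.sh config/server.properties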
Let's say the Kafka cluster has one node, with its Kafka service running on the default port 9092, and we are trying to create a topic named error-events.
$ bin/kafka-topics.sh --create --topic error-events --bootstrap-server localhost:9092
Console Output:
Topic created.
To verify the topic creation, we can describe the topic:
$ bin/kafka-topics.sh --describe --topic error-events --bootstrap-server localhost:9092
Topic:error-events PartitionCount:1 ReplicationFactor:1 Configs:
Topic: error-events Partition: 0 Leader: 0 Replicas: 0 Isr: 0
It prints all the information related to the given Kafka topic, like the partition count, replication factor, replicas, in-sync replicas (ISR), and the leader.
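A couple of related kafka-topics.sh invocations that often come in handy (same single-node, localhost:9092 assumption as above):
# list every topic on the cluster
bin/kafka-topics.sh --list --bootstrap-server localhost:9092
# create a topic with an explicit partition count and replication factor
bin/kafka-topics.sh --create --topic error-events --partitions 3 --replication-factor 1 --bootstrap-server localhost:9092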
First Way:
You can use the mvn option -U (--update-snapshots), which forces Maven to check the remote repositories and re-download updated dependencies into the local repository (the .m2 folder).
E.g.:
mvn clean package -U
Second Way:
Use the maven-dependency-plugin's purge-local-repository goal, which removes the project's dependencies from the local repository so they are resolved again.
E.g.:
mvn dependency:purge-local-repository clean package
Third Way:
You can manually delete the .m2 folder (or just its repository subfolder) in your home directory (~ on Linux and macOS); Maven will re-download everything on the next build.
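A minimal sketch of the third way, assuming the default local repository location:
# remove only the cached artifacts, keeping ~/.m2/settings.xml intact
rm -rf ~/.m2/repository
# rebuild; Maven re-downloads all dependencies
mvn clean package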
You can use Gson's @SerializedName annotation to use different names for serialized fields, i.e., the JSON property names differ from the Java field names.
Maven Dependency Needed:
<dependency>
    <groupId>com.google.code.gson</groupId>
    <artifactId>gson</artifactId>
    <version>2.8.6</version>
</dependency>
For Example:
import com.google.gson.Gson;
import com.google.gson.GsonBuilder;
import com.google.gson.annotations.SerializedName;

public class GsonSerializedNameAnnotationTest {
    public static void main(String[] args) {
        Student student = new Student("Dinesh", "ROLL123", "Hyderabad - 500081");
        // Pretty-printing Gson instance; @SerializedName is honored by default
        Gson gson = new GsonBuilder().setPrettyPrinting().create();
        System.out.println(gson.toJson(student));
    }
}

class Student {
    @SerializedName("student-name")
    String name;
    @SerializedName("student-rollno")
    String rollNo;
    @SerializedName("student-address")
    String address;

    public Student(String name, String rollNo, String address) {
        this.name = name;
        this.rollNo = rollNo;
        this.address = address;
    }
}

Output: the serialized field names differ from the field names in the POJO:
{
  "student-name": "Dinesh",
  "student-rollno": "ROLL123",
  "student-address": "Hyderabad - 500081"
}
You can use the @Expose annotation to include or exclude fields when serializing with the Gson library. By default, the serialize and deserialize attributes of @Expose are true, so writing just @Expose is equivalent to @Expose(serialize = true, deserialize = true). Note that @Expose is only honored when the Gson instance is built with excludeFieldsWithoutExposeAnnotation().
Maven Dependency Needed: add this dependency to your pom.xml file.
<dependency>
    <groupId>com.google.code.gson</groupId>
    <artifactId>gson</artifactId>
    <version>2.8.6</version>
</dependency>
For Example:
import com.google.gson.Gson;
import com.google.gson.GsonBuilder;
import com.google.gson.annotations.Expose;

public class GsonExposeAnnotationTest {
    public static void main(String[] args) {
        Customer customer = new Customer("Dinesh", "CID124", 12);

        // Default Gson: @Expose is ignored, so every field is serialized
        Gson gson = new GsonBuilder().setPrettyPrinting().create();
        System.out.println(gson.toJson(customer));

        // With excludeFieldsWithoutExposeAnnotation(), only fields exposed for
        // serialization (@Expose with serialize = true) appear in the JSON
        gson = new GsonBuilder().setPrettyPrinting().excludeFieldsWithoutExposeAnnotation().create();
        System.out.println(gson.toJson(customer));
    }
}

class Customer {
    @Expose(serialize = true, deserialize = true)
    public String customerName;
    @Expose(serialize = true, deserialize = true)
    public String customerId;
    @Expose(serialize = false, deserialize = false)
    public Integer totalAssets;

    public Customer(String customerName, String customerId, Integer totalAssets) {
        this.customerName = customerName;
        this.customerId = customerId;
        this.totalAssets = totalAssets;
    }
}
Output: the first Gson instance serializes every field; the second, built with excludeFieldsWithoutExposeAnnotation(), keeps only the exposed fields.
{
  "customerName": "Dinesh",
  "customerId": "CID124",
  "totalAssets": 12
}
{
  "customerName": "Dinesh",
  "customerId": "CID124"
}