This repository has been archived by the owner on Feb 12, 2022. It is now read-only.

Commit 1b506d2

Merge pull request #430 from maryannxue/master

Issue #20 - hashjoin implementation

jtaylor-sfdc committed Oct 3, 2013
2 parents: 2b4e034 + 7719a93
Showing 219 changed files with 10,043 additions and 2,699 deletions.
9 changes: 5 additions & 4 deletions README.md
@@ -1,11 +1,11 @@
<h1>Phoenix: A SQL skin over HBase<br />
<em><sup><sup>'We put the SQL back in NoSQL'</sup></sup></em></h1>
![logo](http://forcedotcom.github.com/phoenix/images/logo.jpg)

Phoenix is a SQL skin over HBase, delivered as a client-embedded JDBC driver, powering the HBase use cases at Salesforce.com. Phoenix targets low-latency queries (milliseconds), as opposed to batch operation via map/reduce. To see what's supported, go to our [language reference guide](http://forcedotcom.github.com/phoenix/), read more on our [wiki](https://github.com/forcedotcom/phoenix/wiki), and download it [here](https://github.com/forcedotcom/phoenix/wiki/Download).
## Mission
Become the standard means of accessing HBase data through a well-defined, industry standard API.

## Quick Start
-Tired of reading already and just want to get started? Jump over to our quick start guide [here](https://github.com/forcedotcom/phoenix/wiki/Phoenix-in-15-minutes-or-less) or map to your existing HBase tables as described [here](https://github.com/forcedotcom/phoenix/wiki#wiki-mapping) and start querying now.
+Tired of reading already and just want to get started? Listen to the Phoenix talks from [Hadoop Summit 2013](http://www.youtube.com/watch?v=YHsHdQ08trg) and [HBaseCon 2013](http://www.cloudera.com/content/cloudera/en/resources/library/hbasecon/hbasecon-2013--how-and-why-phoenix-puts-the-sql-back-into-nosql-video.html), check out our [FAQs](https://github.com/forcedotcom/phoenix/wiki/F.A.Q.), and jump over to our quick start guide [here](https://github.com/forcedotcom/phoenix/wiki/Phoenix-in-15-minutes-or-less) or map to your existing HBase tables as described [here](https://github.com/forcedotcom/phoenix/wiki#wiki-mapping) and start querying now.

## How It Works ##

@@ -42,6 +42,7 @@ Alternatively, you can build it yourself using maven by following these [build i


## Getting Started ##
+Want to get started quickly? Take a look at our [FAQs](https://github.com/forcedotcom/phoenix/wiki/F.A.Q.) and try our quick start guide [here](https://github.com/forcedotcom/phoenix/wiki/Phoenix-in-15-minutes-or-less).

<h4>Command Line</h4>

@@ -108,7 +109,7 @@ Currently, Phoenix hosts its own maven repository in github. This is done for co
<dependency>
<groupId>com.salesforce</groupId>
<artifactId>phoenix</artifactId>
-<version>2.0.0</version>
+<version>2.0.2</version>
</dependency>
...
</dependencies>
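Since the README introduces Phoenix as a client-embedded JDBC driver, a minimal quick-start sketch may be useful alongside the dependency above. This is an illustration only: it assumes the phoenix artifact is on the classpath, a ZooKeeper quorum on localhost, and a hypothetical STOCK_SYMBOL table.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class PhoenixQuickStart {
    public static void main(String[] args) throws Exception {
        // Driver class of the com.salesforce Phoenix line; stated here as an assumption.
        Class.forName("com.salesforce.phoenix.jdbc.PhoenixDriver");
        // The JDBC URL is jdbc:phoenix:<zookeeper quorum>; "localhost" is an assumption.
        Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost");
        try {
            Statement stmt = conn.createStatement();
            // STOCK_SYMBOL is a hypothetical table used purely for illustration.
            stmt.executeUpdate("CREATE TABLE IF NOT EXISTS STOCK_SYMBOL "
                + "(SYMBOL VARCHAR PRIMARY KEY, COMPANY VARCHAR)");
            stmt.executeUpdate("UPSERT INTO STOCK_SYMBOL VALUES ('CRM', 'Salesforce.com')");
            conn.commit(); // Phoenix connections do not auto-commit by default
            ResultSet rs = stmt.executeQuery("SELECT * FROM STOCK_SYMBOL");
            while (rs.next()) {
                System.out.println(rs.getString("SYMBOL") + " = " + rs.getString("COMPANY"));
            }
        } finally {
            conn.close();
        }
    }
}
```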
2 changes: 1 addition & 1 deletion build.txt
@@ -1,7 +1,7 @@
# Building Phoenix
================

-Phoenix uses Maven (3.X) to build all its necessary resources.
+Phoenix uses Maven (3.X) to build all its necessary resources.

## Building from source
=======================
291 changes: 291 additions & 0 deletions dev/PhoenixCodeTemplate.xml

Large diffs are not rendered by default.

6 changes: 3 additions & 3 deletions docs/phoenix.csv
@@ -855,14 +855,14 @@ PERCENTILE_DISC is an inverse distribution function that assumes a discrete dist
PERCENTILE_DISC( 0.9 ) WITHIN GROUP (ORDER BY X DESC)
"

"Functions (Aggregate)","PERCENTILE_RANK","
PERCENTILE_RANK( { numeric } ) WITHIN GROUP (ORDER BY { numericTerm } { ASC | DESC } )
"Functions (Aggregate)","PERCENT_RANK","
PERCENT_RANK( { numeric } ) WITHIN GROUP (ORDER BY { numericTerm } { ASC | DESC } )
","
The percentile rank for a hypothetical value, if inserted into the column.
Aggregates are only allowed in select statements.
The returned value is of decimal data type.
","
-PERCENTILE_RANK( 100 ) WITHIN GROUP (ORDER BY X ASC)
+PERCENT_RANK( 100 ) WITHIN GROUP (ORDER BY X ASC)
"

"Functions (Aggregate)","STDDEV_POP","
6 changes: 3 additions & 3 deletions pom.xml
@@ -3,7 +3,7 @@
<modelVersion>4.0.0</modelVersion>
<groupId>com.salesforce</groupId>
<artifactId>phoenix</artifactId>
-<version>2.1.0-SNAPSHOT</version>
+<version>2.2.0-SNAPSHOT</version>
<name>Phoenix</name>
<description>A SQL layer over HBase</description>

@@ -63,7 +63,7 @@
<test.output.tofile>true</test.output.tofile>

<!-- Dependency versions -->
-<hbase.version>0.94.10</hbase.version>
+<hbase.version>0.94.12</hbase.version>
<commons-cli.version>1.2</commons-cli.version>
<hadoop.version>1.0.4</hadoop.version>
<pig.version>0.11.0</pig.version>
@@ -313,7 +313,7 @@
<artifactId>maven-surefire-plugin</artifactId>
<version>2.13</version>
<configuration>
-<argLine>-Xmx2500m</argLine>
+<argLine>-enableassertions -Xmx2500m -Djava.security.egd=file:/dev/./urandom</argLine>
<redirectTestOutputToFile>${test.output.tofile}</redirectTestOutputToFile>
</configuration>
</plugin>
34 changes: 12 additions & 22 deletions src/main/antlr3/PhoenixSQL.g
@@ -596,19 +596,13 @@ parseOrderByField returns [OrderByNode ret]
;

parseFrom returns [List<TableNode> ret]
-: l=table_refs { $ret = l; }
-| l=join_specs { $ret = l; }
-;
-
-table_refs returns [List<TableNode> ret]
@init{ret = new ArrayList<TableNode>(4); }
-: t=table_ref {$ret.add(t);}
-(COMMA t=table_ref {$ret.add(t);} )*
+: t=table_ref {$ret.add(t);} (s=sub_table_ref { $ret.add(s); })*
;

-// parse a field, if it might be a bind name.
-named_table returns [NamedTableNode ret]
-: t=from_table_name (LPAREN cdefs=dyn_column_defs RPAREN)? { $ret = factory.namedTable(null,t,cdefs); }
+sub_table_ref returns [TableNode ret]
+: COMMA t=table_ref { $ret = t; }
+| t=join_spec { $ret = t; }
;

table_ref returns [TableNode ret]
@@ -617,12 +611,8 @@ table_ref returns [TableNode ret]
| LPAREN s=select_node RPAREN ((AS)? alias=identifier)? { $ret = factory.subselect(alias, s); }
;

-join_specs returns [List<TableNode> ret]
-: t=named_table {$ret.add(t);} (s=join_spec { $ret.add(s); })+
-;
-
-join_spec returns [JoinTableNode ret]
-: j=join_type JOIN t=named_table ON e=condition { $ret = factory.join(null, t, e, j); }
+join_spec returns [TableNode ret]
+: j=join_type JOIN t=table_ref ON e=condition { $ret = factory.join(j, e, t); }
;

join_type returns [JoinTableNode.JoinType ret]
@@ -655,13 +645,12 @@ condition_and returns [ParseNode ret]

// NOT or parenthesis
condition_not returns [ParseNode ret]
-: ( boolean_expr ) => e=boolean_expr { $ret = e; }
-| NOT e=boolean_expr { $ret = factory.not(e); }
-| LPAREN e=condition RPAREN { $ret = e; }
+: (NOT? boolean_expr ) => n=NOT? e=boolean_expr { $ret = n == null ? e : factory.not(e); }
+| n=NOT? LPAREN e=condition RPAREN { $ret = n == null ? e : factory.not(e); }
;

boolean_expr returns [ParseNode ret]
-: (l=expression ((EQ r=expression {$ret = factory.equal(l,r); } )
+: l=expression ((EQ r=expression {$ret = factory.equal(l,r); } )
| ((NOEQ1 | NOEQ2) r=expression {$ret = factory.notEqual(l,r); } )
| (LT r=expression {$ret = factory.lt(l,r); } )
| (GT r=expression {$ret = factory.gt(l,r); } )
@@ -675,7 +664,8 @@ boolean_expr returns [ParseNode ret]
| (LPAREN r=select_expression RPAREN {$ret = factory.in(l,r,n!=null);} )
| (v=values {List<ParseNode> il = new ArrayList<ParseNode>(v.size() + 1); il.add(l); il.addAll(v); $ret = factory.inList(il,n!=null);})
)))
-))))
+))
+| { $ret = l; } )
;

bind_expression returns [BindParseNode ret]
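The reworked FROM-clause rules are the parser half of the hash join work: comma-separated table references and JOIN ... ON specs can now be mixed in a single FROM clause, and condition_not folds an optional NOT over both plain and parenthesized conditions. A hedged sketch of the kind of query this grammar accepts, issued over JDBC (the connection URL, tables, and columns are all illustrative assumptions, not from the commit):

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class HashJoinFromClauseExample {
    public static void main(String[] args) throws Exception {
        // "localhost" and the ORDERS/ITEMS/CUSTOMERS tables are assumptions for illustration.
        Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost");
        try {
            Statement stmt = conn.createStatement();
            // Per the revised parseFrom/sub_table_ref rules, a comma-separated table
            // reference may now be followed by a JOIN spec in the same FROM clause.
            String sql = "SELECT o.ORDER_ID, c.NAME, i.DESCRIPTION "
                + "FROM ORDERS o, ITEMS i "
                + "JOIN CUSTOMERS c ON o.CUST_ID = c.CUST_ID "
                + "WHERE o.ITEM_ID = i.ITEM_ID "
                + "AND NOT (i.PRICE > 100)"; // condition_not now parses NOT over parentheses
            ResultSet rs = stmt.executeQuery(sql);
            while (rs.next()) {
                System.out.println(rs.getString(1) + " | " + rs.getString(2)
                    + " | " + rs.getString(3));
            }
        } finally {
            conn.close();
        }
    }
}
```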
76 changes: 76 additions & 0 deletions src/main/java/com/salesforce/hbase/index/CapturingAbortable.java
@@ -0,0 +1,76 @@
/*******************************************************************************
* Copyright (c) 2013, Salesforce.com, Inc.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions are met:
*
* Redistributions of source code must retain the above copyright notice,
* this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above copyright notice,
* this list of conditions and the following disclaimer in the documentation
* and/or other materials provided with the distribution.
* Neither the name of Salesforce.com nor the names of its contributors may
* be used to endorse or promote products derived from this software without
* specific prior written permission.
*
* THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
* ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
* WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
* DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
* FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
* DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
* SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
* CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
* OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
* OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
******************************************************************************/
package com.salesforce.hbase.index;

import org.apache.hadoop.hbase.Abortable;

/**
* {@link Abortable} that can rethrow the cause of the abort.
*/
public class CapturingAbortable implements Abortable {

private Abortable delegate;
private Throwable cause;
private String why;

public CapturingAbortable(Abortable delegate) {
this.delegate = delegate;
}

@Override
public void abort(String why, Throwable e) {
if (delegate.isAborted()) {
return;
}
this.why = why;
this.cause = e;
delegate.abort(why, e);

}

@Override
public boolean isAborted() {
return delegate.isAborted();
}

/**
* Throw the cause of the abort, if <tt>this</tt> was aborted. If there was an exception causing
* the abort, re-throws that. Otherwise, just throws a generic {@link Exception} with the reason
* why the abort was caused.
* @throws Throwable the cause of the abort.
*/
public void throwCauseIfAborted() throws Throwable {
if (!this.isAborted()) {
return;
}
if (cause == null) {
throw new Exception(why);
}
throw cause;
}
}
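As a usage sketch (not from the commit), CapturingAbortable wraps a delegate Abortable so the first abort can later be rethrown to the caller; the anonymous delegate below is a trivial stand-in, where in the indexer it would typically be the region server's Abortable:

```java
import org.apache.hadoop.hbase.Abortable;

public class CapturingAbortableExample {
    public static void main(String[] args) throws Throwable {
        // A trivial stand-in delegate; in practice this would be the server's Abortable.
        Abortable delegate = new Abortable() {
            private volatile boolean aborted;

            @Override
            public void abort(String why, Throwable e) {
                this.aborted = true;
            }

            @Override
            public boolean isAborted() {
                return aborted;
            }
        };

        CapturingAbortable capturing = new CapturingAbortable(delegate);
        // Hand `capturing` to work that may abort; here we simulate a failure.
        capturing.abort("index update failed", new java.io.IOException("simulated failure"));
        // Surface the captured failure to the caller: rethrows the IOException above.
        capturing.throwCauseIfAborted();
    }
}
```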
@@ -30,6 +30,8 @@
import java.io.IOException;
import java.util.concurrent.locks.ReentrantReadWriteLock.WriteLock;

+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HRegionInfo;
import org.apache.hadoop.hbase.HTableDescriptor;
@@ -75,6 +77,7 @@
*/
public class IndexLogRollSynchronizer implements WALActionsListener {

+private static final Log LOG = LogFactory.getLog(IndexLogRollSynchronizer.class);
private WriteLock logArchiveLock;

public IndexLogRollSynchronizer(WriteLock logWriteLock){
@@ -85,18 +88,21 @@ public IndexLogRollSynchronizer(WriteLock logWriteLock){
@Override
public void preLogArchive(Path oldPath, Path newPath) throws IOException {
//take a write lock on the index - any pending index updates will complete before we finish
LOG.debug("Taking INDEX_UPDATE writelock");
logArchiveLock.lock();
}

@Override
public void postLogArchive(Path oldPath, Path newPath) throws IOException {
// done archiving the logs, any WAL updates will be replayed on failure
LOG.debug("Releasing INDEX_UPDATE writelock");
logArchiveLock.unlock();
}

@Override
public void logCloseRequested() {
-// don't care- before this is called, all the HRegions are closed, so we can't get any new requests and all pending request can finish before the WAL closes.
+// don't care- before this is called, all the HRegions are closed, so we can't get any new
+// requests and all pending request can finish before the WAL closes.
}

@Override
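The listener's comments describe the intended locking protocol: index writers are expected to hold the read side of the same ReentrantReadWriteLock whose write side is taken in preLogArchive, so log archiving waits for in-flight index updates while concurrent updates never block each other. A self-contained sketch of that pattern, with illustrative names (a simplification under stated assumptions, not the indexer's actual code):

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class LogRollLockPattern {
    private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();

    /** Index writers hold the read lock, so many updates can proceed concurrently. */
    public void writeIndexUpdate() {
        lock.readLock().lock();
        try {
            // append the index update to the WAL...
        } finally {
            lock.readLock().unlock();
        }
    }

    /** Archiving takes the write lock, blocking until pending updates finish. */
    public void archiveLog() {
        lock.writeLock().lock();
        try {
            // archive the old WAL file...
        } finally {
            lock.writeLock().unlock();
        }
    }
}
```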
